Category Archives: Artificial Intelligence
Artificial Intelligence in the Education Sector Market Size 2022 : Share and Trend, Growth Strategies with Revenue, future Scope, Analytical Overview…
Posted: March 31, 2022 at 3:12 am
The Artificial Intelligence in the Education Sector Market Research Report examines growth drivers along with current and future trends. The market comprises numerous players; the report profiles the leading companies, covering their business outlines, financial overviews and the business strategies they have adopted for the 2022-2028 forecast period.
Global Artificial Intelligence in the Education Sector Market Research Report 2022-2028 by Players, Regions, Product Types and Applications
The Global Artificial Intelligence in the Education Sector Market report provides detailed data on growth rates, market estimates, drivers, restraints, future demand and revenue over the forecast period. It draws on data gathered from numerous primary and secondary sources, checked and validated by industry analysts, providing significant insights for researchers, analysts, managers and other industry professionals. The report also aids in understanding market trends, applications, specifications and challenges.
The world has entered the period of global recovery from COVID-19. In this complex economic environment, we published the Global Artificial Intelligence in the Education Sector Market Growth, Status, Trends and COVID-19 Impact Report, which gives a brief analysis of the global Artificial Intelligence in the Education Sector market.
Final Report will add the analysis of the impact of COVID-19 on this industry.
To understand how the COVID-19 impact is covered in this report, request a sample copy of the report.
Who are some of the key players operating in the Artificial Intelligence in the Education Sector market, and how intense is the competition in 2022?
Company Information: List by Country Top Manufacturers/ Key Players In Artificial Intelligence in the Education Sector Market Insights Report Are:
The ability of a computer program to imitate the human intelligence needed for a task is termed artificial intelligence (AI). Integrating AI into the education sector is driving a revolution through its results-driven approach. According to our latest research, the global Artificial Intelligence in the Education Sector market size will reach USD million in 2028, growing at a CAGR of % over the analysis period.

Global Artificial Intelligence in the Education Sector Scope and Market Size

This report focuses on the global Artificial Intelligence in the Education Sector status, future forecast, growth opportunities, key markets and key players. The study objectives are to present the market's development in North America, Europe, China, Japan, Southeast Asia, India and Central and South America, etc.
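For reference, the CAGR the report quotes is the constant yearly rate that compounds a starting market size into the ending one. A minimal sketch of the calculation (the figures below are hypothetical, not taken from the report):

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate: the constant yearly growth rate that
    compounds start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical figures (not from the report): a market growing from
# USD 2.0bn in 2022 to USD 5.0bn in 2028, i.e. over 6 years.
rate = cagr(2.0, 5.0, 6)  # roughly 16.5% per year
```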
Sample PDF of the report at https://www.marketgrowthreports.com/enquiry/request-sample/20190218
Global Artificial Intelligence in the Education Sector Market 2022 Research Report is spread across 105 pages and provides exclusive vital statistics, data, information, trends and competitive landscape details in this niche sector.
Type Coverage (Market Size and Forecast, Major Company of Product Type etc.)
Application Coverage (Market Size and Forecast, Different Demand Market by Region, Main Consumer Profile):
The report covers the key players of the business, including company profiles, product specifications, production capacity/sales, revenue, price and gross margin, together with an exhaustive investigation of the market's competitive landscape, detailed data on vendors and thorough coverage of the factors that will challenge the growth of major market vendors.
Enquire before purchasing this report: https://www.marketgrowthreports.com/enquiry/pre-order-enquiry/20190218
Some of the key questions answered in this report:
Geographical Segmentation and Competition Analysis
Get a Sample PDF of report https://www.marketgrowthreports.com/enquiry/request-sample/20190218
With tables and figures helping analyze the worldwide market, the Global Artificial Intelligence in the Education Sector Market Forecast provides key statistics on the state of the industry and is a valuable source of guidance and direction for companies and individuals interested in the market.
Major Points from the Table of Contents:
Global Artificial Intelligence in the Education Sector Market Research Report 2022-2028, by Manufacturers, Regions, Types and Applications
1 Study Coverage
1.1 Artificial Intelligence in the Education Sector Product Introduction
1.2 Market by Type
1.2.1 Global Artificial Intelligence in the Education Sector Market Size Growth Rate by Type
1.3 Market by Application
1.3.1 Global Artificial Intelligence in the Education Sector Market Size Growth Rate by Application
1.4 Study Objectives
1.5 Years Considered
2 Global Artificial Intelligence in the Education Sector Production
2.1 Global Artificial Intelligence in the Education Sector Production Capacity (2016-2028)
2.2 Global Artificial Intelligence in the Education Sector Production by Region: 2016 VS 2022 VS 2028
2.3 Global Artificial Intelligence in the Education Sector Production by Region
2.3.1 Global Artificial Intelligence in the Education Sector Historic Production by Region (2016-2022)
2.3.2 Global Artificial Intelligence in the Education Sector Forecasted Production by Region (2022-2028)
3 Global Artificial Intelligence in the Education Sector Sales in Volume and Value Estimates and Forecasts
3.1 Global Artificial Intelligence in the Education Sector Sales Estimates and Forecasts 2016-2028
3.2 Global Artificial Intelligence in the Education Sector Revenue Estimates and Forecasts 2016-2028
3.3 Global Artificial Intelligence in the Education Sector Revenue by Region: 2016 VS 2022 VS 2028
3.4 Global Top Artificial Intelligence in the Education Sector Regions by Sales
3.4.1 Global Top Artificial Intelligence in the Education Sector Regions by Sales (2016-2022)
3.4.2 Global Top Artificial Intelligence in the Education Sector Regions by Sales (2022-2028)
3.5 Global Top Artificial Intelligence in the Education Sector Regions by Revenue
3.5.1 Global Top Artificial Intelligence in the Education Sector Regions by Revenue (2016-2022)
3.5.2 Global Top Artificial Intelligence in the Education Sector Regions by Revenue (2022-2028)
3.6 North America
3.7 Europe
3.8 Asia-Pacific
3.9 Latin America
3.10 Middle East and Africa
4 Competition by Manufactures
4.1 Global Artificial Intelligence in the Education Sector Supply by Manufacturers
4.1.1 Global Top Artificial Intelligence in the Education Sector Manufacturers by Production Capacity (2016 VS 2022)
4.1.2 Global Top Artificial Intelligence in the Education Sector Manufacturers by Production (2016-2022)
4.2 Global Artificial Intelligence in the Education Sector Sales by Manufacturers
4.2.1 Global Top Artificial Intelligence in the Education Sector Manufacturers by Sales (2016-2022)
4.2.2 Global Top Artificial Intelligence in the Education Sector Manufacturers Market Share by Sales (2016-2022)
4.2.3 Global Top 10 and Top 5 Companies by Artificial Intelligence in the Education Sector Sales in 2022
4.3 Global Artificial Intelligence in the Education Sector Revenue by Manufacturers
4.3.1 Global Top Artificial Intelligence in the Education Sector Manufacturers by Revenue (2016-2022)
4.3.2 Global Top Artificial Intelligence in the Education Sector Manufacturers Market Share by Revenue (2016-2022)
4.3.3 Global Top 10 and Top 5 Companies by Artificial Intelligence in the Education Sector Revenue in 2022
4.4 Global Artificial Intelligence in the Education Sector Sales Price by Manufacturers
4.5 Analysis of Competitive Landscape
4.5.1 Manufacturers Market Concentration Ratio (CR5 and HHI)
4.5.2 Global Artificial Intelligence in the Education Sector Market Share by Company Type (Tier 1, Tier 2, and Tier 3)
4.5.3 Global Artificial Intelligence in the Education Sector Manufacturers Geographical Distribution
4.6 Mergers and Acquisitions, Expansion Plans
5 Market Size by Type
5.1 Global Artificial Intelligence in the Education Sector Sales by Type
5.1.1 Global Artificial Intelligence in the Education Sector Historical Sales by Type (2016-2022)
5.1.2 Global Artificial Intelligence in the Education Sector Forecasted Sales by Type (2022-2028)
5.1.3 Global Artificial Intelligence in the Education Sector Sales Market Share by Type (2016-2028)
5.2 Global Artificial Intelligence in the Education Sector Revenue by Type
5.2.1 Global Artificial Intelligence in the Education Sector Historical Revenue by Type (2016-2022)
5.2.2 Global Artificial Intelligence in the Education Sector Forecasted Revenue by Type (2022-2028)
5.2.3 Global Artificial Intelligence in the Education Sector Revenue Market Share by Type (2016-2028)
5.3 Global Artificial Intelligence in the Education Sector Price by Type
5.3.1 Global Artificial Intelligence in the Education Sector Price by Type (2016-2022)
5.3.2 Global Artificial Intelligence in the Education Sector Price Forecast by Type (2022-2028)
6 Market Size by Application
6.1 Global Artificial Intelligence in the Education Sector Sales by Application
6.1.1 Global Artificial Intelligence in the Education Sector Historical Sales by Application (2016-2022)
6.1.2 Global Artificial Intelligence in the Education Sector Forecasted Sales by Application (2022-2028)
6.1.3 Global Artificial Intelligence in the Education Sector Sales Market Share by Application (2016-2028)
6.2 Global Artificial Intelligence in the Education Sector Revenue by Application
6.2.1 Global Artificial Intelligence in the Education Sector Historical Revenue by Application (2016-2022)
6.2.2 Global Artificial Intelligence in the Education Sector Forecasted Revenue by Application (2022-2028)
6.2.3 Global Artificial Intelligence in the Education Sector Revenue Market Share by Application (2016-2028)
6.3 Global Artificial Intelligence in the Education Sector Price by Application
6.3.1 Global Artificial Intelligence in the Education Sector Price by Application (2016-2022)
6.3.2 Global Artificial Intelligence in the Education Sector Price Forecast by Application (2022-2028)
7 Artificial Intelligence in the Education Sector Consumption by Regions
7.1 Global Artificial Intelligence in the Education Sector Consumption by Regions
7.1.1 Global Artificial Intelligence in the Education Sector Consumption by Regions
7.1.2 Global Artificial Intelligence in the Education Sector Consumption Market Share by Regions
7.2 North America
7.2.1 North America Artificial Intelligence in the Education Sector Consumption by Application
7.2.2 North America Artificial Intelligence in the Education Sector Consumption by Countries
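Section 4.5.1 of the table of contents measures market concentration via CR5 and HHI. As a quick illustration of how those two standard metrics are computed (the market shares below are hypothetical):

```python
def concentration_metrics(shares):
    """Given firms' market shares in percent (summing to ~100), return
    (CR5, HHI): CR5 is the combined share of the five largest firms;
    HHI is the sum of squared shares, on the usual 0-10,000 scale."""
    ordered = sorted(shares, reverse=True)
    cr5 = sum(ordered[:5])
    hhi = sum(s * s for s in shares)
    return cr5, hhi

# Hypothetical shares for a seven-firm market.
cr5, hhi = concentration_metrics([30, 20, 15, 10, 10, 8, 7])
# cr5 == 85, hhi == 1838 (moderately concentrated by the usual thresholds)
```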
Posted in Artificial Intelligence
Zoomd Announces the Acquisition of Artificial Intelligence Marketing Platform "Albert" – PR Newswire
Posted: at 3:12 am
VANCOUVER, BC, March 28, 2022 /PRNewswire/ -- Zoomd Technologies Ltd.(TSXV: ZOMD) (OTC: ZMDTF) and its wholly-owned subsidiary Zoomd Ltd. (collectively, "Zoomd" or the "Company"), the marketing tech (MarTech) user-acquisition and engagement platform, today announced its acquisition (the "Transaction") of Albert Technologies Ltd. ("Albert") on March 27, 2022. Albert is a U.S.-based artificial intelligence marketing platform for advertisers, driving fully autonomous digital campaigns for some of the world's leading brands. The consideration for the Transaction payable by Zoomd is a combination of cash and shares paid on March 27, 2022, being the closing date, and a future share-based earn-out payment, based on meeting certain criteria.
Albert processes and analyzes audience and tactical data at scale, thereby autonomously allocating budgets and optimizing creative and evolving campaigns across paid search and social media. Albert's value proposition to its clients is to ease the complexities of scaling, primarily using the Google and Facebook platforms, by executing campaigns at a pace and scale that were generally not previously possible. By autonomously combing through mass amounts of data, converting this data into insights, and autonomously acting on these insights, across channels, devices, and formats, Albert eliminates the manual and time-consuming tasks that generally limit the effectiveness and results of modern digital advertising and marketing.
"While we are also releasing some of our products onto a Self-Service and SaaS business model, Albert enhances our efforts immediately, with additional solid offerings that cover branding and awareness needs. Furthermore, we view Albert as complementary for mobile apps, particularly with regard to our future plans relating to Web3," said Ofer Eitan, Zoomd CEO, adding: "We view M&A activity that brings industry professionals, supplementary technology and a solid customer base as part of Zoomd's growth objective. This acquisition shows our ambition to provide our partners a SaaS platform for scaling with minimal effort. Albert's team is a group of extremely talented veterans who fit Zoomd's culture. They have a number of Fortune 500 customers that will now be able to use our products and services. We are happy and excited to have the team come on board."
Or Shani, Founder and CEO of Albert commented: "We are excited to join Zoomd, a fast growing company in the marketing technology space. We believe that our business, based on our unique, patented and proven technology, will further accelerate given the great scale and financial strength of Zoomd."
For the purposes of the Transaction, the share component of the consideration will be valued at the higher of (i) the closing price of the shares on the date prior to their issuance and (ii) US$1.00 per share. Zoomd did not assume any of Albert's debt and no finder's fees were paid or are payable in connection with the Transaction. All shares to be issued pursuant to the Transaction are subject to the prior approval of the TSX-V.
About Zoomd:
Zoomd (TSXV: ZOMD, OTC: ZMDTF), founded in 2012, began trading on the TSX Venture Exchange in September 2019. It offers a site search engine to publishers and a mobile app user-acquisition platform, integrated with a majority of global digital media, to advertisers. The platform unifies more than 600 media sources into one dashboard, giving advertisers a user-acquisition control center for managing all new customer acquisition campaigns from a single platform. By unifying these media sources, Zoomd saves advertisers significant resources that would otherwise be spent consolidating data sources, maximizing data collection and data insights while minimizing the resources spent on the exercise. Further, Zoomd is a performance-based platform that allows advertisers to reach relevant target audiences using a key-performance-indicator algorithm focused on achieving the advertisers' goals and targets.
Neither TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the Exchange) accepts responsibility for the adequacy or accuracy of this release.
DISCLAIMER IN REGARD TO FORWARD-LOOKING STATEMENTS
This news release includes certain "forward-looking statements" under applicable Canadian securities legislation. Forward-looking statements include, but are not limited to, statements with respect to the successful closing of the Transaction and the future success of Albert, Zoomd's future ability to successfully continue its growth, its ability to continue to deliver products and services largely unimpacted by the privacy updates undertaken (or to be undertaken in the future) by Google and Apple, as well as its ability to continue expanding into new geographies and industries. Forward-looking statements are based on our current assumptions, estimates, expectations and projections that, while considered reasonable, are subject to known and unknown risks, uncertainties, and other factors that may cause the actual results and future events to differ materially from those expressed or implied by such forward-looking statements. Such factors include, but are not limited to: general business, economic, competitive, technological, legal, privacy, political and social uncertainties (including the impacts of the COVID-19 pandemic and the current war in Ukraine), the extent and duration of which are uncertain at this time, on Zoomd's business and general economic and business conditions and markets. There can be no assurance that any of the forward-looking statements will prove to be accurate, as actual results and future events could differ materially from those anticipated in such statements. Accordingly, readers should not place undue reliance on forward-looking statements. The Company disclaims any intention or obligation to update or revise any forward-looking statements, whether because of new information, future events or otherwise, except as required by law.
The reader should not place undue importance on forward-looking information and should not rely upon this information as of any other date. All forward-looking information contained in this press release is expressly qualified in its entirety by this cautionary statement.
FOR FURTHER INFORMATION PLEASE CONTACT:
Company Media Contact: Amit Bohensky, Chairman, Zoomd, [emailprotected]
Website: http://www.zoomd.com
Investor relations: Lytham Partners, LLC, Ben Shamsian, New York | Phoenix, [emailprotected]
SOURCE Zoomd Technologies Ltd.
WEF publishes toolkit on artificial intelligence and kids – Western Standard
Posted: at 3:12 am
The World Economic Forum (WEF) published a report titled "Artificial Intelligence for Children," a toolkit to enable various stakeholders to develop trustworthy artificial intelligence for children and youth.

"Children and youth are surrounded by AI in many of the products they use in their daily lives, from social media to education technology, video games, smart toys and speakers. AI determines the videos children watch online, their curriculum as they learn," the report's introduction says.
The WEF toolkit was created by a team of academics, business leaders, technologists, and youth leaders. Its purpose is to enable the business sector to create ethical, responsible, and trustworthy AI to support parents, guardians and youth to navigate the AI environment safely.
The toolkit includes a tool for business aimed at the C-suite. It provides actionable frameworks with real-world support to help companies design innovative and responsible AI for young people.

"Many companies use AI to differentiate their brands and their products by incorporating it into toys, interactive games, extended reality applications, social media, streaming platforms and educational products. With little more than a patchwork of regulations to guide them, organizations must navigate a sea of privacy and ethics concerns related to data capture and the training and use of AI models. Executive leaders must strike a balance between realizing the potential of AI and helping reduce the risk of harm to children and youth and, ultimately, their brand," the report says.
The C-suite checklist guides business on topics such as full disclosure, systemic bias, age-sensitive user validation, and privacy. These areas are where businesses often fall short in a fast-evolving field where regulatory frameworks struggle to keep up.
The WEF report calls for an AI labelling system that would put a QR code on the box of every product incorporating AI potentially used by children, informing consumers, especially parents and guardians, of the nature of the contents and possible considerations.

The QR code would share information about the product such as its age-appropriateness; whether it uses a microphone or camera; whether it can connect with other users on the internet; and whether it employs facial or voice recognition or gathers data.
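The disclosures listed above amount to a small data schema. A sketch of what such a QR payload might encode, assuming a JSON format; the WEF report does not specify one, and every field name here is illustrative:

```python
import json

def build_ai_label(product_name, min_age, has_microphone, has_camera,
                   connects_online, uses_recognition, collects_data):
    """Serialize the disclosures described above (age-appropriateness,
    sensors, connectivity, facial/voice recognition, data collection)
    as the JSON payload a QR code on the packaging could encode.
    All field names are illustrative assumptions."""
    return json.dumps({
        "product": product_name,
        "min_age": min_age,
        "microphone": has_microphone,
        "camera": has_camera,
        "connects_with_other_users": connects_online,
        "facial_or_voice_recognition": uses_recognition,
        "gathers_data": collects_data,
    })

# A hypothetical smart toy's label payload.
payload = build_ai_label("Smart Bear", 6, True, False, True, False, True)
```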
The topic of AI and children has not been taken lightly since the advent of AI. In 2017, MIT published a study on AI's impact on children and childhood called "Hey Google, is it OK if I eat you?"
"Autonomous technology is becoming more prevalent in our daily lives. We investigated how children perceive this technology by studying how 26 participants (3-10 years old) interact with Amazon Alexa, Google Home, Cozmo, and Julie Chatbot. [The] children answered questions about trust, intelligence, social entity, personality, and engagement. We identify four themes in child-agent interaction: perceived intelligence, identity attribution, playfulness and understanding. Our findings show how different modalities of interaction may change the way children perceive their intelligence in comparison to the agents. We also propose a series of design considerations for future child-agent interaction around voice and prosody, interactive engagement and facilitating understanding," the MIT report's abstract said.
MIT raised concerns at the time that unregulated toys already on the market using AI and an internet connection were and continue to be a serious concern in terms of invasiveness and privacy.
"Already, the Internet of Toys is raising privacy and security concerns. Take Mattel's Aristotle, for instance. This bot, which is like an Amazon Echo for kids, can record children's video and audio and has an uninterrupted connection to the internet. Despite the intimate link Aristotle has with young children, Mattel has said that it will not conduct research into how the device is affecting kids' development. Another smart toy, the interactive Cayla doll, was taken off the market in Germany because its Bluetooth connection made it vulnerable to hacking," MIT said.
The WEF report concludes by warning stakeholders that AI products come with risks and benefits and that the risks are especially concerning in relation to their use by children.
Amanda Brown is a reporter with the Western Standardabrown@westernstandardonline.comTwitter: @WS_JournoAmanda
'Really alarming': the rise of smart cameras used to catch maskless students in US schools – The Guardian
Posted: at 3:12 am
When students in suburban Atlanta returned to school for in-person classes amid the pandemic, they were required to mask up, like in many places across the US. Yet in this 95,000-student district, officials took mask compliance a step further than most.
Through a network of security cameras, officials harnessed artificial intelligence to identify students whose masks drooped below their noses.
"If they say a picture is worth a thousand words, if I send you a piece of video it's probably worth a million," said Paul Hildreth, the district's emergency operations coordinator. "You really can't deny, 'Oh yeah, that's me, I took my mask off.'"
The school district in Fulton county had installed the surveillance network, by Motorola-owned Avigilon, years before the pandemic shuttered schools nationwide in 2020. Out of fear of mass school shootings, districts in recent years have increasingly deployed controversial surveillance networks like cameras with facial recognition and gun detection.
With the pandemic, security vendors switched directions and began marketing their wares as a solution to stop the latest threat. In Fulton county, the district used Avigilon's "no face mask detection" technology to identify students with their faces exposed.
Remote learning during the pandemic ushered in a new era of digital student surveillance as schools turned to AI-powered services like remote proctoring and digital tools that sift through billions of students' emails and classroom assignments in search of threats and mental health warning signs. Back on campus, districts have rolled out tools like badges that track students' every move.
But one of the most significant developments has been in AI-enabled cameras. Twenty years ago, security cameras were present in 19% of schools, according to the National Center for Education Statistics. Today, that number exceeds 80%. Powering those cameras with artificial intelligence makes automated surveillance possible, enabling things like temperature checks and the collection of other biometric data.
Districts across the country have said they had bought AI-powered cameras to fight the pandemic. But as pandemic-era protocols like mask mandates end, experts said the technology will remain. Some educators have stated plans to leverage pandemic-era surveillance tech for student discipline while others hope AI cameras will help them identify youth carrying guns.
The cameras have faced sharp resistance from civil rights advocates who question their effectiveness and argue they trample students privacy rights.
Noa Young, a 16-year-old high school junior in Fulton county, said she knew that cameras monitored her school but wasn't aware of their hi-tech features like mask detection. She agreed with the district's now-expired mask mandate but felt that educators should have been more transparent about the technology.

"I think it's helpful for Covid stuff but it seems a little intrusive," Young said in an interview. "I think it's strange that we were not aware of that."
Outside Fulton county, educators have used AI cameras to fight Covid on multiple fronts.
In Rockland regional school unit 13 in Maine, officials used federal pandemic relief money to procure a network of cameras with face match technology for contact tracing. Through advanced surveillance, the cameras, made by California-based security company Verkada, allow the 1,600-student district to identify students who came in close contact with classmates who tested positive for Covid-19.
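The contact-tracing workflow described here — flagging students seen near a classmate who later tested positive — can be sketched as a co-occurrence query over camera detection logs. A minimal illustration; the data model, the same-camera criterion, and the 15-minute window are assumptions for illustration, not Verkada's actual system:

```python
from datetime import datetime, timedelta

def close_contacts(detections, positive_person, window_minutes=15):
    """detections: (person_id, camera_id, timestamp) tuples produced by a
    face-matching camera network. Returns every other person seen on the
    same camera within window_minutes of a sighting of positive_person."""
    window = timedelta(minutes=window_minutes)
    sightings = [(cam, ts) for p, cam, ts in detections if p == positive_person]
    contacts = set()
    for person, cam, ts in detections:
        if person == positive_person:
            continue
        if any(cam == s_cam and abs(ts - s_ts) <= window
               for s_cam, s_ts in sightings):
            contacts.add(person)
    return contacts

# Example: bob shared cam1 with alice within 15 minutes; dave passed the
# same camera 40 minutes later and carol was on a different camera.
t0 = datetime(2022, 3, 1, 9, 0)
log = [("alice", "cam1", t0),
       ("bob", "cam1", t0 + timedelta(minutes=5)),
       ("carol", "cam2", t0 + timedelta(minutes=5)),
       ("dave", "cam1", t0 + timedelta(minutes=40))]
traced = close_contacts(log, "alice")  # {"bob"}
```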
At a district in suburban Houston, officials spent nearly $75,000 on AI-enabled cameras from Hikvision, a surveillance company owned in part by the Chinese government, and deployed thermal imaging and facial detection to identify students with elevated temperatures and those without masks.
The cameras can screen as many as 30 people at a time and are therefore less intrusive than slower processes, said Ty Morrow, the Brazosport independent school district's head of security. The checkpoints have helped the district identify students who later tested positive for Covid-19, Morrow said, although a surveillance testing company has argued Hikvision's claim of accurately scanning 30 people at once is not possible.
"That was just one more tool that we had in the toolbox to show parents that we were doing our due diligence to make sure that we weren't allowing kids or staff with Covid into the facilities," he said.

Yet it's this mentality that worries consultant Kenneth Trump, the president of Cleveland-based National School Safety and Security Services. Security hardware for the sake of public perception, the industry expert said, is simply smoke and mirrors.

"It's creating a facade," he said. "Parents think that all the bells and whistles are going to keep their kids safer and that's not necessarily the case. With cameras, in the vast majority of schools, nobody is monitoring them."
When the Fulton county district upgraded its surveillance camera network in 2018, officials were wooed by Avigilon's AI-powered "appearance search," which allows security officials to sift through a mountain of video footage and identify students based on characteristics like their hairstyle or the color of their shirt. When the pandemic hit, the company's mask detection became an attractive add-on, Hildreth said.

He said the district didn't actively advertise the technology to students but they probably became aware of it quickly after students got called out for breaking the rules. He doesn't know students' opinions about the cameras or seem to care.

"I wasn't probably as much interested in their reaction as much as their compliance," Hildreth said. "You don't have to like something that's good for you, but you still need to do it."
A Fulton county district spokesperson said they were unaware of any instances where students were disciplined because the cameras caught them without masks.
Among the school security industry's staunchest critics is Sneha Revanur, a 17-year-old high school student from San Jose, California, who founded the youth-led group Encode Justice to highlight the dangers artificial intelligence poses to civil liberties.
Revanur said she was concerned by districts' decisions to implement surveillance cameras as a public health strategy, and that the technology in schools could result in harsher discipline for students, particularly youth of color.
Verkada offers a cautionary tale. Last year, the company suffered a massive data breach when a hack exposed the live feeds of 150,000 surveillance cameras, including those inside Tesla factories, jails and Sandy Hook elementary school in Newtown, Connecticut. The Newtown district, which suffered a mass school shooting in 2012, said the breach didn't expose compromising information about students. The vulnerability hasn't deterred some educators from contracting with the California-based company.
After a back-and-forth with a spokesperson, Verkada declined to grant an interview or respond to a list of written questions.
Revanur called the Verkada hack at Sandy Hook elementary "a staggering indictment" of educators' rush for dragnet surveillance systems that treat everyone as a constant suspect at the expense of student privacy. Constant monitoring, she argued, "creates this culture of fear and paranoia that truly isn't the most proactive response to gun violence and safety concerns."
In Fayette county, Georgia, the district spent about $500,000 to buy 70 Hikvision cameras with thermal imaging to detect students with fevers. But it ultimately backtracked and disabled them after community uproar over their efficacy and Hikvision's ties to the Chinese government. In 2019, the US government imposed a trade blacklist on Hikvision, alleging the company was implicated in China's "campaign of repression, mass arbitrary detention and high-technology surveillance" against Muslim ethnic minorities.
The school district declined to comment. In a statement, a Hikvision spokesperson said the company "takes all reports regarding human rights very seriously" and has engaged governments globally to clarify misunderstandings about the company. The company is committed to upholding the right to privacy, the spokesperson said.
Meanwhile, Regional School Unit 13's decision to use Verkada security cameras as a contact tracing tool could run afoul of a 2021 law that bans the use of facial recognition in Maine schools. The district didn't respond to requests for comment.
Michael Kebede, the ACLU of Maine's policy counsel, cited recent studies on facial recognition's flaws in identifying children and people of color and called on the district to reconsider its approach.
"We fundamentally disagree that using a tool of mass surveillance is a way to promote the health and safety of students," Kebede said in a statement. "It is a civil liberties nightmare for everyone, and it perpetuates the surveillance of already marginalized communities."
In Fulton county, school officials wound up disabling the face mask detection feature in cafeterias because it was triggered by people eating lunch. Other times, it identified students who pulled their masks down briefly to take a drink of water.
In suburban Houston, Morrow ran into similar hurdles. When white students wore light-colored masks, for example, the face detection sounded alarms. And if students rode bikes to school, the cameras flagged their elevated temperatures.
"We've got some false positives but it was not a failure of the technology," Hildreth said. "We just had to take a look and adapt what we were looking at to match our needs."
With those lessons learned, Hildreth said he hoped to soon equip Fulton county campuses with AI-enabled cameras that identify students who bring guns to school.
In a post-pandemic world, Albert Fox Cahn, founder of the non-profit Surveillance Technology Oversight Project, worries the entire school security industry will take a similar approach.
"With the pandemic hopefully waning, we'll see a lot of security vendors pivoting back to school shooting rhetoric as justification for the camera systems," he said. Due to the potential for errors, Cahn called the embrace of AI surveillance in schools "really alarming."
This report was published in partnership with The 74, a non-profit, non-partisan news site covering education in America.
Really alarming: the rise of smart cameras used to catch maskless students in US schools – The Guardian
Artificial intelligence is everywhere now. This report shows how we got here. – Popular Science
Posted: March 17, 2022 at 2:18 am
Artificial intelligence is getting cheaper, better at the tasks we assign it, and more widespreadbut concerns over bias, ethics, and regulatory oversight still remain. At a time when AI is becoming accessible to everyone, Stanford University put together a sweeping 2022 report analyzing the ins and outs of the growing field. Here are some of the highlights.
The number of publications alone on the topic tells a story: they doubled in the last decade, from 162,444 in 2010 to 334,497 in 2021. The most popular AI categories that researchers and others published on were pattern recognition, machine learning, and algorithms.
What's more, the number of patent filings related to AI innovations in 2021 was 30 times greater than in 2015. In 2021, the majority of filed patents came from China, but the majority of patents actually granted came from the US.
The number of users participating in open-source AI software libraries on GitHub also rose from 2015 to 2021. These libraries house collections of computer codes that are used for applications and products. One called TensorFlow remains the most popular, followed by OpenCV, Keras and PyTorch (which Meta AI uses).
Specifically, out of the various tasks that AI can perform, last year, the research community was focused on applying AI to computer vision, a subfield that teaches machines to understand images and videos in order to get good at classifying images, recognizing objects, mapping the position and movement of human body joints, and detecting faces (with and without masks).
[Related: MIT scientists taught robots how to sabotage each other]
For image classification, the most popular database used to train AI models is called ImageNet. Some researchers pre-train their models on additional datasets before exposing them to ImageNet, but models still make mistakes, on average misidentifying 1 out of 10 images. The best-performing model comes from the Google Brain Team. In addition to identifying images and faces, AI can also generate fake images that are nearly indistinguishable from real ones; to combat this, researchers have been working on deepfake detection algorithms based on datasets like FaceForensics++.
[Related: This new AI tool from Google could change the way we search online]
Natural language processing, a subfield that has been actively explored since the 1950s, is slowly making progress in English language understanding, summarizing, inferring reasonable outcomes, identifying emotional context, speech recognition and transcription, and translation. For basic reading comprehension, AI can perform better than humans, but when language tasks get more complicated, like when interpreting context clues is necessary, humans still have an edge. On the other hand, AI ethicists are worried that bias could affect large language models that draw from a mixed bag of training data.
Tech companies like Amazon, Netflix, Spotify, and YouTube have been improving the AI used in recommendation systems. The same is true for AIs role in reinforcement learning, which has enabled it to react and perform well in virtual games such as chess and Go. Reinforcement learning can also be used to teach autonomous vehicles tasks like changing lanes, or help data models predict future events.
As AI appears to have become better at doing what we want it to do, the cost to train it has come down as well, dropping by over 60 percent since 2018. Meanwhile, a system that would've taken 6 minutes to train in 2018 would now take only a little over 13 seconds. Accounting for hardware costs, in 2021 an image classification system would take less than $5 to train, whereas that cost would've been over $1,000 in 2017.
More AI applications across industries means more demand for AI education and jobs. Across the US in 2021, California, Texas, New York, and Virginia had the highest demand for AI-related occupations. In the last decade, the most popular specialties among PhD computer science students were artificial intelligence and machine learning.
Private investment in AI is at an all-time high, totaling $93.5 billion in 2021 (double the amount from 2020). AI companies skilled in data management, processing, and cloud computing, according to the report, got the most funding in 2021, followed by companies dedicated to medical and healthcare applications and financial technology (fintech for short).
In fiscal year 2021, US government agencies spent $1.53 billion on AI research and development for non-defense purposes, which was 2.7 times the amount spent in fiscal year 2018. For defense purposes, the Department of Defense allocated $9.26 billion across 500 AI research and development programs in 2021, which was about 6 percent more than what it spent in the year before. The top two uses of AI were for prototyping technologies and in programs countering weapons of mass destruction.
Lastly, the report looked at global, federal, and state regulations related to AI (searching for keywords like "artificial intelligence," "machine learning," "autonomous vehicle" or "algorithmic bias"). The report examined 25 countries around the world and found that they collectively passed 55 AI-related bills into law from 2016 to 2021. Last year, Spain, the UK and the US each had three AI-related bills that became law.
What's Next in Artificial Intelligence? Three Key Directions – Stanford HAI
After a long winter, the artificial intelligence field has seen a resurgence in the past 15 years as computing power increased and vast amounts of digital data became available. In the past few years alone, giant language models have advanced quickly enough to outpace benchmarks, computer vision capabilities have taken self-driving cars from the lab to the street, and generative models have tested democracies during major elections.
But parallel to this technology's rapid rise is its potential for massive harm; technologists, activists, and academics alike have begun calling for better regulation and understanding of its impact.
This spring, the Stanford Institute for Human-Centered AI (HAI) will address three of the most critical areas of artificial intelligence during a one-day conference, free and open to all.
Stanford HAI Associate Director and linguistics and computer science professor Christopher Manning, who will cohost the event with HAI Denning Co-director and computer science professor Fei-Fei Li, explains what this conference will cover and who should attend.
This conference will look at key advances in AI. Why are we focusing on foundation models, accountable AI, and embodied AI? What makes these the areas where you expect major growth?
An enormous amount of work is going on in AI in many directions. For a one-day event, we wanted to focus on a small number of areas that we felt were key to where the most important and exciting research might appear this decade. We ended up focusing on three areas. First, there has been enormous excitement and investment around the development of large pre-trained language models and their generalization to multiple data modalities, which we have named "foundation models." Second, there has been an exciting resurgence of work linking AI and robotics, often enabled by the use of simulated worlds, which allow the exploration of embodied AI and grounding. Finally, increasing concerns about understanding AI decisions and maintaining data privacy in part demand societal and regulatory solutions, but they are also an opportunity for technical AI advances in how you can produce interpretable AI systems, or systems that still work effectively on data that is obscured for privacy reasons.
Who are you excited to hear from?
Ilya Sutskever has been one of the central people at the heart of the resurgence of deep learning-based AI, starting from his breakthrough work on the computer vision system AlexNet with Geoff Hinton in 2012. His impact has grown since he became the chief scientist of OpenAI, which among other things has led in the development of foundation models. I'm looking forward to hearing more about their latest models such as InstructGPT and what he sees lying ahead.
The recent successes in AI just would not have been possible without the amazing breakthroughs in parallel computing largely led by NVIDIA. Bill Dally is a leader in computer architecture, and, for the last decade, he has been the chief scientist at NVIDIA. He can give us powerful insights into the recent and future advances in parallel computing via GPUs but also insights into the broader range of vision, virtual reality, and other AI research going on at NVIDIA.
And Hima Lakkaraju is a trailblazing Harvard professor developing new strands of work in trustworthy and interpretable machine learning. When AI models are used in high-stakes settings, most times people would like accurate and reliable explanations of why the systems make certain decisions. One exciting direction in Hima's work is in developing formal Bayesian models that can give reliable explanations.
Who should attend this conference?
Through a combination of short talks and panel discussions, were trying to achieve a balance between technical depth and accessibility. So on the one hand this conference should be of interest to anyone working in AI as a student, researcher, or developer, but beyond that we hope to be able to convey some of the excitement, results, and progress in these areas to anybody with an interest in AI, whether as a scientist, decision maker, or concerned citizen.
What do you hope your audience will take away from this experience?
I hope the audience will get a deeper understanding of how AI has been able to advance so quickly in the last 15 years, where it might go next, and what we should and shouldn't worry about. I hope people will come away with a sense of the awesome powers of the huge new foundation models that are being built. But equally, they will see why building a model from mountains of digital data is not sufficient, and why we want to explore embodied AI models in a physical or simulated world that can learn more as babies learn. And finally, we will see something of the exciting technical work now underway to address the worries and downsides of AI that have been very prominently covered in the media in recent years.
Interested in attending the 2022 HAI Spring Conference? Learn more or register.
The Vulnerability of AI Systems May Explain Why Russia Isn't Using Them Extensively in Ukraine – Forbes
Output of an artificial intelligence system from Google Vision, performing facial recognition on a photograph of a man in San Ramon, California, November 22, 2019. (Photo by Smith Collection/Gado/Getty Images)
The news that Ukraine is using facial recognition software to uncover Russian assailants and identify Ukrainians killed in the ongoing war is noteworthy largely because it's one of the few documented uses of artificial intelligence in the conflict. A Georgetown University think tank is trying to figure out why, while advising U.S. policymakers of the risks of AI.
The CEO of the controversial American facial recognition company Clearview AI told Reuters that Ukraine's defense ministry began using its imaging software Saturday after Clearview offered it for free. The reportedly powerful recognition tool relies on artificial intelligence algorithms and a massive quantity of image training data scraped from social media and the internet.
But aside from Russian influence campaigns with their much-discussed deep fakes and misinformation-spreading bots, the lack of known tactical use (at least publicly) of AI by the Russian military has surprised many observers. Andrew Lohn isn't one of them.
Lohn, a senior fellow with Georgetown University's Center for Security and Emerging Technology, works on its Cyber-AI Project, which is seeking to draw policymakers' attention to the growing body of academic research showing that AI and machine-learning (ML) algorithms can be attacked in a variety of basic, readily exploitable ways.
"We have perhaps the most aggressive cyber actor in the world in Russia, who has twice turned off the power to Ukraine and used cyber-attacks in Georgia more than a decade ago. Most of us expected the digital domain to play a much larger role. It's been small so far," Lohn says.
"We have a whole bunch of hypotheses [for limited AI use] but we don't have answers. Our program is trying to collect all the information we can from this encounter to figure out which are most likely."
They range from the potential effectiveness of Ukrainian cyber and counter-information operations, to an unexpected shortfall in Russian preparedness for digital warfare in Ukraine, to Russias need to preserve or simplify the digital operating environment for its own tactical reasons.
All probably play some role, Lohn believes, but just as crucial may be a dawning recognition of the limits and vulnerability of AI/ML. The willingness to deploy AI tools in combat is a confidence game.
Junk In, Junk Out
Artificial intelligence and machine learning require vast amounts of data, both for training and to interpret for alerts, insights or action. Even when AI/ML systems have access to an unimpeded base of data, they are only as good as the information and assumptions which underlie them. If for no other reason than natural variability, both can be significantly flawed. Whether AI/ML systems work as advertised "is a huge question," Lohn acknowledges.
The tech community refers to unanticipated information as "out of distribution" data. AI/ML may perform at what is deemed to be an acceptable level in a laboratory or in otherwise controlled conditions, Lohn explains. "Then when you throw it into the real world, some of what it experiences is different in some way. You don't know how well it will perform in those circumstances."
In circumstances where life, death and military objectives are at stake, having confidence in the performance of artificial intelligence in the face of disrupted, deceptive, often random data is a tough ask.
Lohn recently wrote a paper assessing the performance of AI/ML when such systems ingest out-of-distribution data. While their performance doesn't fall off quite as quickly as he anticipated, he says that if they operate in an environment where there's a lot of conflicting data, "they're garbage."
He also points out that the accuracy rate of AI/ML is impressively high only relative to low expectations. For example, image classifiers can work at 94%, 98% or 99.9% accuracy. The numbers are striking until one considers that safety-critical systems like cars, airplanes, healthcare devices and weapons are typically certified out to five or six decimal points (99.999999%) of accuracy.
Lohn says AI/ML systems may still be better than humans at some tasks, but the AI/ML community has yet to figure out what accuracy standards to put in place for system components. Testing for AI systems "is very challenging," he adds.
For a start, the artificial intelligence development community lacks a test culture similar to what has become so familiar for military aerospace, land, maritime, space or weapons systems; a kind of test-safety regime that holistically assesses the systems-of-systems that make up the above.
The absence of such a back end, combined with specific conditions in Ukraine, may go some distance to explain the limited application of AI/ML on the battlefield. Alongside it lies the very real vulnerability of AI/ML to the compromised information and active manipulation that adversaries already seek to feed it.
Bad Data, Spoofed Data & Classical Hacks
Attacking AI/ML systems isn't hard. It doesn't even require access to their software or databases. Age-old deceptions like camouflage, subtle visual environment changes or randomized data can be enough to throw off artificial intelligence.
As a recent article in the Armed Forces Communications and Electronics Association's (AFCEA) magazine noted, researchers from Chinese technology giant Tencent managed to get a Tesla sedan's autopilot (self-driving) feature to switch lanes into oncoming traffic simply by using inconspicuous stickers on the roadway. McAfee Security researchers used similarly discreet stickers on speed limit signs to get a Tesla to speed up to 85 miles per hour in a 35-mile-an-hour zone.
An Israeli soldier is seen during a military exercise in the Israeli Arab village of Abu Gosh on October 20, 2013 in Abu Gosh, Israel. (Photo by Lior Mizrahi/Getty Images)
Such deceptions have probably already been examined and used by militaries and other threat actors, Lohn says, but the AI/ML community is reluctant to openly discuss exploits that can warp its technology. The quirk of digital AI/ML systems is that their ability to sift quickly through vast data sets - from images to electromagnetic signals - is a feature that can be used against them.
"It's like coming up with an optical illusion that tricks a human, except with a machine you get to try it a million times within a second and then determine what's the best way to effect this optical trick," Lohn says.
The fact that AI/ML systems tend to be optimized to zero in on certain data to bolster their accuracy may also be problematic.
"We're finding that [AI/ML] systems may be performing so well because they're looking for features that are not resilient," Lohn explains. "Humans have learned to not pay attention to things that aren't reliable. Machines see something in the corner that gives them high accuracy, something humans miss or have chosen not to see. But it's easy to trick."
The ability to spoof AI/ML from outside joins with the ability to attack its deployment pipeline. The supply chain databases on which AI/ML rely are often open public databases of images or software information libraries like GitHub.
"Anyone can contribute to these big public databases in many instances," Lohn says. "So there are avenues [to mislead AI] without even having to infiltrate."
The National Security Agency has recognized the potential of such data poisoning. In January, Neal Ziring, director of NSA's Cybersecurity Directorate, explained during a Billington CyberSecurity webinar that research into detecting data poisoning or other cyber attacks "is not mature." Some attacks work by simply seeding specially crafted images into AI/ML training sets, which have been harvested from social media or other platforms.
According to Ziring, a doctored image can be indistinguishable to human eyes from a genuine image. Poisoned images typically contain data that can train the AI/ML to misidentify whole categories of items.
"The mathematics of these systems, depending on what type of model you're using, can be very susceptible to shifts in the way recognition or classification is done, based on even a small number of training items," he explained.
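Ziring's point about small numbers of training items can be illustrated with a deliberately tiny example. The sketch below is not any real system's code: it uses a toy nearest-centroid classifier on two-dimensional points (stand-ins for image features) and shows how a single mislabeled, extreme training item shifts a class centroid enough to change how a clean test input is classified.

```python
import numpy as np

# Toy data-poisoning demo with a nearest-centroid classifier, an illustrative
# stand-in for the models discussed, not any real system's training code.

def centroids(X, y):
    # Mean feature vector per class label.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(x, cents):
    # Assign x to the class with the nearest centroid.
    return min(cents, key=lambda c: np.linalg.norm(x - cents[c]))

# Clean training set: class 0 clustered near (0, 0), class 1 near (4, 4).
X = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5],
              [4.0, 4.0], [4.5, 4.0], [4.0, 4.5]])
y = np.array([0, 0, 0, 1, 1, 1])

test_point = np.array([1.5, 1.5])              # clearly closer to class 0
print(classify(test_point, centroids(X, y)))   # classified as 0

# Poison: one far-away point mislabeled as class 0 drags its centroid away.
X_poisoned = np.vstack([X, [[-30.0, -30.0]]])
y_poisoned = np.append(y, 0)
print(classify(test_point, centroids(X_poisoned, y_poisoned)))  # now 1
```

Real poisoning attacks are far subtler, hiding pixel-level changes in images that look genuine, but the mechanism is the same: a few corrupt training items shift the learned decision boundary.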
Stanford cryptography professor Dan Boneh told AFCEA that one technique for crafting poisoned images is known as the fast gradient sign method (FGSM). The method identifies key data points in training images, leading an attacker to make targeted pixel-level changes, called "perturbations," in an image. The modifications turn the image into an "adversarial example," providing data inputs that make the AI/ML misidentify it by fooling the model being used. A single corrupt image in a training set can be enough to poison an algorithm, causing misidentification of thousands of images.
FGSM attacks are "white box" attacks, in which the attacker has access to the source code of the AI/ML. They can be conducted on open-source AI/ML, for which there are several publicly accessible repositories.
"You typically want to try the AI a bunch of times and tweak your inputs so they yield the maximum wrong answer," Lohn says. "It's easier to do if you have the AI itself and can [query] it. That's a white box attack."
"If you don't have that, you can design your own AI that does the same [task] and you can query that a million times. You'll still be pretty effective at [inducing] the wrong answers. That's a black box attack. It's surprisingly effective."
"Black box" attacks, where the attacker only has access to the AI/ML inputs, training data and outputs, make it harder to generate a desired wrong answer. But they're effective at producing random misinterpretation, "creating chaos," Lohn explains.
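For readers curious what an FGSM-style white-box attack looks like mechanically, here is a minimal sketch under strong simplifying assumptions: the "model" is a single linear classifier rather than a neural network, so the gradient of its score with respect to the input is simply the weight vector, and the epsilon value is illustrative rather than drawn from any system discussed above.

```python
import numpy as np

# Toy white-box FGSM sketch. The "classifier" is linear: score = w . x,
# predicting class 1 when the score is positive. For a linear model the
# gradient of the score with respect to the input x is just w, so the FGSM
# step x_adv = x - eps * sign(w) nudges every input dimension a fixed small
# amount in the direction that lowers the class-1 score.

rng = np.random.default_rng(0)
w = rng.normal(size=64)              # stand-in for trained model weights

def predict(x):
    return int(w @ x > 0)            # 1 if the score is positive, else 0

x = 0.2 * w / np.linalg.norm(w)      # an input the model confidently calls class 1
x_adv = x - 0.08 * np.sign(w)        # FGSM perturbation with eps = 0.08

print(predict(x), predict(x_adv))    # the uniform nudge flips the prediction
```

A black-box variant would run the same procedure against a substitute model trained to mimic the target, exploiting the transferability of adversarial examples that Lohn describes.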
DARPA has taken up the problem of increasingly complex attacks on AI/ML that dont require inside access/knowledge of the systems being threatened. It recently launched a program called Guaranteeing AI Robustness against Deception (GARD), aimed at the development of theoretical foundations for defensible ML and the creation and testing of defensible systems.
More classical exploits, wherein attackers seek to penetrate and manipulate the software and networks that AI/ML run on, remain a concern. The tech firms and defense contractors crafting artificial intelligence systems for the military have themselves been targets of active hacking and espionage for years. While Lohn says there has been less reporting of algorithm and software manipulation, "that would potentially be doable as well."
"It may be harder for an adversary to get in and change things without being noticed if the defender is careful, but it's still possible."
Since 2018, the Army Research Laboratory (ARL), along with research partners in the Internet of Battlefield Things Collaborative Research Alliance, has looked at methods to harden the Army's machine learning algorithms and make them less susceptible to adversarial machine learning techniques. In 2019 the collaborative developed a tool it calls the Attribution-Based Confidence Metric for Deep Neural Networks to provide a sort of quality assurance for applied AI/ML.
Despite the work, ARL scientist Brian Jalaian told its public affairs office: "While we had some success, we did not have an approach to detect the strongest state-of-the-art attacks such as [adversarial] patches that add noise to imagery, such that they lead to incorrect predictions."
If the U.S. AI/ML community is facing such problems, the Russians probably are too. Andrew Lohn acknowledges that there are few standards for AI/ML development, testing and performance, certainly nothing like the Cybersecurity Maturity Model Certification (CMMC) that DoD and others adopted nearly a decade ago.
Lohn and CSET are trying to communicate these issues to U.S. policymakers not to dissuade the deployment of AI/ML systems, Lohn stresses, but to make them aware of the limitations and operational risks (including ethical considerations) of employing artificial intelligence.
Thus far, he says, policymakers are difficult to paint with a broad brush. "Some of those I've talked with are gung-ho, others are very reticent. I think they're beginning to become more aware of the risks and concerns."
He also points out that the progress we've made in AI/ML over the last couple of decades may be slowing. In another recent paper, he concluded that advances in the formulation of new algorithms have been overshadowed by advances in computational power, which has been the driving force in AI/ML development.
"We've figured out how to string together more computers to do a [computational] run. For a variety of reasons, it looks like we're basically at the edge of our ability to do that. We may already be experiencing a breakdown in progress."
Policymakers looking at Ukraine, and at the world before Russia's invasion, were already asking about the reliability of AI/ML for defense applications, trying to gauge the level of confidence they should place in it. Lohn says he's basically been telling them the following:
"Self-driving cars can do some things that are pretty impressive. They also have giant limitations. A battlefield is different. If you're in a permissive environment with an application similar to existing commercial applications that have proven successful, then you're probably going to have good odds. If you're in a non-permissive environment, you're accepting a lot of risk."
Picsart Launches AI-Generated Fonts, Paving the Way for Asset Creation by Artificial Intelligence – Business Wire
Posted: at 2:18 am
MIAMI--(BUSINESS WIRE)--Picsart, the world's leading digital creation platform and a top-20 most-downloaded app worldwide, today announced its first-ever end-to-end solution for fonts generated by artificial intelligence.
This project began as part of a Picsart hackathon last year and developed into a full-fledged AI-generated font solution. Creators on the Picsart platform can already access and apply over 30 of these unique fonts as part of Picsart Gold, with additional fonts added monthly.
"In the last decade, we've seen a shift in written communication becoming increasingly visual," said Anush Ghambaryan, Director of AI and Machine Learning at Picsart. "As the demand for visual communication increases, so does the need for new and unlimited options for creation. Fonts are one of the most popular features in our entire content library, and this new technology opens the door for infinite font creation."
Picsart AI Research (PAIR) develops new fonts by training AI models on a large dataset of selected fonts, allowing the models to create glyphs (letters, symbols and numbers) from a provided input, like a font-related keyword or tag. The technology creates thousands of glyphs, converts them to a vectorized image after passing additional checks such as quality control, and then produces a font file in one of the most common font file types: .TTF or .OTF.
"At PAIR, we're on an exciting mission to innovate and develop the best AI tools and products to empower creative communication for everyone," said Humphrey Shi, Chief Scientist at Picsart and PAIR Founder and Lead. "And releasing this new font generation solution is a great start. The future applications for designers, creators and businesses to use this technology are exciting, as it will reduce time and cost and increase the possibilities for design and communication."
Picsart launched PAIR last year with Shi to accelerate AI research, development and products at this new innovation hub, which has already released industry-leading research and tools, including a one-tap Background Remove tool.
This news comes just after the company made its world-class creative tools available to businesses through the opening of its API. The API offering includes AI and image tools that make photos and videos stand out, and processes them faster. Picsart also has plans to integrate AI fonts into its API offering. To apply these fonts to your own creations, download the app or visit picsart.com.
About Picsart
Picsart is the world's largest digital creation platform and a top-20 most-downloaded app. Every month, the Picsart community creates, remixes, and shares billions of visual stories using the company's powerful and easy-to-use editing tools. Picsart has amassed one of the largest open-source content collections in the world, including free-to-edit photos, stickers, backgrounds, templates, and more. Picsart is available in 30 languages for free and as a subscription on iOS, Android, Windows devices and on the Web. Headquartered in Miami, with offices around the world, Picsart is backed by SoftBank, Sequoia Capital, DCM Ventures, Insight Partners, and others. Download the app or visit picsart.com for more information.
Posted in Artificial Intelligence
Comments Off on Picsart Launches AI-Generated Fonts, Paving the Way for Asset Creation by Artificial Intelligence – Business Wire
The Use of Artificial Intelligence as a Strategy to Analyse Urban Informality – ArchDaily
Posted: at 2:18 am
Within the Latin American and Caribbean region, it has been recorded that at least 25% of the population lives in informal settlements. Since their expansion is one of the major problems afflicting these cities, this article presents an IDB-supported project showing how new technologies can contribute to the identification and detection of these areas in order to intervene in them and help reduce urban informality.
Informal settlements, also known as slums, shantytowns, camps or favelas, depending on the country in question, are uncontrolled settlements on land where, in many cases, the conditions for a dignified life are not in place. Made up of self-built dwellings, these sites are generally the result of a continuously growing housing deficit.
For decades, the possibility of collecting information about the Earth's surface through satellite imagery has been contributing to the analysis and production of increasingly accurate and useful maps for urban planning. In this way, one can see not only the growth of cities, but also the speed at which they are growing and the characteristics of their buildings.
Advances in artificial intelligence facilitate the processing of large amounts of information. When a satellite or aerial image is taken of a neighbourhood where a municipal team has previously demarcated informal areas, the image is processed by an algorithm that identifies the visual patterns characteristic of the area as observed from space. The algorithm then finds other areas with similar characteristics in other images, automatically recognising the districts where informality predominates. It is worth noting that while satellites are able to report both where and how informal settlements are growing, specialised equipment and processing infrastructure are also required.
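The supervised approach described above - learn from patches hand-labelled by a municipal team, then classify the rest of the imagery - can be illustrated with a toy nearest-centroid classifier. The features and labels below are invented for illustration; real systems such as the one piloted here typically use deep segmentation models on raw imagery:

```python
# Toy sketch: patches of a satellite image are summarised as feature
# vectors; labelled patches train a nearest-centroid model that then
# classifies unlabelled patches as "informal" or "formal".

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(xs) / len(vectors) for xs in zip(*vectors)]

def distance_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(labelled_patches):
    """labelled_patches: list of (feature_vector, label) pairs."""
    by_label = {}
    for vec, label in labelled_patches:
        by_label.setdefault(label, []).append(vec)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def classify(model, vec):
    """Assign the label whose centroid is closest to the patch."""
    return min(model, key=lambda label: distance_sq(model[label], vec))

# Invented toy features, e.g. (roof density, mean building footprint):
labelled = [((0.9, 0.2), "informal"), ((0.8, 0.3), "informal"),
            ((0.3, 0.8), "formal"), ((0.2, 0.9), "formal")]
model = train(labelled)
print(classify(model, (0.85, 0.25)))  # informal
```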
This particular project brings to the table the role of artificial intelligence in detecting and acting on informal settlements in Colombia, where, during 2018, the population exceeded 48 million inhabitants, with three out of four people residing in cities. In fact, it is estimated that the population will increase by 28% by 2050, with the urban share remaining equal or greater. Thus, there is a real need to build new urban homes.
The Government of Colombia appointed the National Planning Department (DNP) to support the Ministry of Housing in defining new methodologies to address the problem of informal housing. In 2021, the DNP was supported by the Housing and Urban Development Division of the IDB and the company GIM and carried out a pilot project using artificial intelligence to obtain detailed information on Colombian informal housing. The Mayor's Office of Barranquilla provided the data for the project.
In practice, it was demonstrated that there was an overlap of about 85% between the areas delimited by the algorithm's maps and those produced by the local specialists, which was sufficient to recognise and prioritise the areas in need of intervention.
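The article does not specify how the ~85% agreement was measured; one common choice for comparing two map delineations is intersection over union. A minimal sketch, with invented grid-cell IDs:

```python
# Compare the set of map cells the algorithm marks "informal" with the
# set marked by local specialists, using the Jaccard index (IoU).

def agreement(algorithm_cells: set, specialist_cells: set) -> float:
    """Intersection over union of the two delineations, in [0, 1]."""
    union = algorithm_cells | specialist_cells
    if not union:
        return 1.0  # both empty: trivially identical
    return len(algorithm_cells & specialist_cells) / len(union)

algo = {1, 2, 3, 4, 5, 6}   # cells flagged by the algorithm (invented)
spec = {2, 3, 4, 5, 6, 7}   # cells flagged by specialists (invented)
print(round(agreement(algo, spec), 2))  # 0.71
```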
The idea is to be able to use this system in other regions. The IDB seeks to extend the technology used in Barranquilla to all of Latin America and the Caribbean through a software package called AMISAI (Automated Mapping of Informal Settlements with Artificial Intelligence), which is part of the Open Urban Planning Toolbox, a catalogue of open-source digital tools for urban planning.
Source: Luz Adriana Moreno González, Véronique de Laet, Héctor Antonio Vázquez Brust, Patricio Zambrano Barragán, "Can Artificial Intelligence Help Reduce Urban Informality?"
Posted in Artificial Intelligence
Comments Off on The Use of Artificial Intelligence as a Strategy to Analyse Urban Informality – ArchDaily
UA researchers’ startup uses artificial intelligence to detect the ‘fingerprints of disease’ – Inside Tucson Business
Posted: at 2:18 am
When cells in the body become diseased, their signals and molecules change, sometimes long before symptoms emerge. New technology out of the University of Arizona aims to detect these metabolic changes with machine learning to hopefully catch diseases sooner and expedite the healing process.
A new startup company co-founded by two UA researchers uses artificial intelligence to identify these "fingerprints of disease," possibly before the issue is detectable through other means. Ruslan Rafikov and Olga Rafikova, both associate professors in the UA College of Medicine, launched MetFora with help from Tech Launch Arizona, the UA office that commercializes inventions from university research.
While the research that led to MetFora originally focused on the lungs, this technology has the potential to detect diseases in a wide variety of organs.
"The idea came from our animal research on pulmonary hypertension. We found that if we induce pulmonary hypertension in animals, before it produces any physiological effects in the lungs, they change their metabolic profile very quickly. It can be after only three days. That gave us the idea to check for these same changes in a patient's blood," Rafikov said. "This is important because people usually struggle to get a diagnosis. They can spend years from the first symptoms to get a correct diagnosis. If we can detect it as soon as possible, treatment will be more impactful."
While human researchers may manually monitor the molecules, Rafikov describes it as a very complex process involving more than a dozen metabolites.
"It's so complex that, in our findings, training AI is more important than finding a way to train a physician to find these tiny metabolites," said Rafikov, who is an affiliate of the American Thoracic Society and the American Association for the Advancement of Science. "Each disease affects different organs, and within the organs, each disease can affect different types of cells. And once the cells are affected, they change their fine-tuned metabolic fluxes in and out of the cell. Using this knowledge, we think we can determine the exact problem in each organ."
The process involves a blood draw that is then tested with mass spectrometry. However, because of the complicated nature of the test, it could not be conducted in-hospital; the blood would need to be shipped to MetFora's lab for testing through an AI statistical analysis.
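The workflow described - reduce a mass-spectrometry readout to a vector of metabolite levels, then score it against learned disease profiles - can be sketched as follows. The metabolite names, weights, and scoring rule are all invented for illustration and do not reflect MetFora's actual model:

```python
# Hedged sketch of "metabolic fingerprint" classification: a sample's
# metabolite levels are scored against hypothetical per-disease weight
# profiles, and the highest-scoring profile wins.

import math

# Invented weights, standing in for parameters learned from labelled samples.
PROFILES = {
    "pulmonary_hypertension": {"glutamine": 1.8, "lactate": 1.2, "citrate": -0.9},
    "healthy": {"glutamine": -0.5, "lactate": -0.4, "citrate": 0.6},
}

def score(levels: dict, weights: dict) -> float:
    """Logistic score: weighted sum of metabolite levels squashed to (0, 1)."""
    z = sum(weights.get(m, 0.0) * v for m, v in levels.items())
    return 1.0 / (1.0 + math.exp(-z))

def classify(levels: dict) -> str:
    """Return the profile with the highest score for this sample."""
    return max(PROFILES, key=lambda d: score(levels, PROFILES[d]))

# An invented mass-spec readout (normalized metabolite levels):
sample = {"glutamine": 0.9, "lactate": 0.7, "citrate": 0.1}
print(classify(sample))  # pulmonary_hypertension
```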
"The model for most diagnostics is some sort of kit or instrument that the hospital has, but this would not be that. The most natural way to bring it to market is through a lab-developed test," said MetFora CEO Martin Fuchs. "The test time isn't long at all. It's really about getting the sample to us in our lab and getting the results back. And we see that taking about two to four days."
Fuchs was introduced to the researchers through Arizona FORGE, an office under the same umbrella as Tech Launch Arizona that helps foster entrepreneurship across the university campus.
"One of the things that caught my eye is that it has applicability to a broad range of diseases," said Fuchs, who has launched two other technology companies. "We really see this as something everyone can get during their annual doctor's visit. Because it's a simple blood draw, I feel like it could become a standard of care."
Although Tech Launch Arizona does not create the startups itself, it helps researchers through the process and works to negotiate licenses.
Doug Hockstad, assistant vice president for TLA, says the university system has 250 to 300 inventions disclosed every year. For researchers and their inventions, TLA assigns licensing managers to help them through the process of commercialization.
"They do a market analysis to see what companies are operating in a particular space, and a patent landscape analysis to see what patents are out there in the same area. That information is taken back to the inventor, and we more or less jointly come up with a plan on whether or not we're going to target this invention to an existing company or create a startup company around the technology," Hockstad said. "Technologies coming out of universities are at a very early stage. Often, much too early for an existing company to be interested. So with a lot of technologies, the right way to go is to create a startup around it."
Hockstad says the market analysis will sometimes discover that there is no market for the product, or that it's too heavily patented already.
"We try not to pick and choose winners. At the stage these are at, it's very hard to foresee what is going to be successful or not," Hockstad said.
TLA has helped organize more than 100 startups since 2013, with more than 19 startups in fiscal year 2020 alone. According to a report from the McGuire Center for Entrepreneurship at the UAs Eller College of Management, startups associated with TLA have generated more than $25 million in state and local taxes and more than 5,000 new jobs.
"We do everything from software to curriculum to therapeutic compounds to medical devices," Hockstad said. "It runs the gamut."
Most recently, MetFora was one of four medical technology companies that participated in the startup competition Venture Madness in Phoenix on March 2 and 3.
"We are very grateful to TLA for the support. They helped with patents and regulatory analysis and mentoring. We've even had seasoned executives from area companies help us through this process," Fuchs said. "They really provided the impetus to get the company started."
Posted in Artificial Intelligence
Comments Off on UA researchers’ startup uses artificial intelligence to detect the ‘fingerprints of disease’ – Inside Tucson Business