Daily Archives: September 20, 2021

Navalny allies accuse Telegram and other platforms of censorship – Al Jazeera English

Posted: September 20, 2021 at 9:32 am

Jailed Kremlin critic Alexei Navalny's allies are accusing YouTube and Telegram of censorship after the video platform and messaging app restricted access to their anti-government voting recommendations for Russia's parliamentary election.

The latest accusations came on Saturday, one day after Navalny's allies had already accused Alphabet's Google and Apple of buckling under Kremlin pressure after the companies removed an app from their stores that the activists had hoped to use against the ruling party in the election.

Voting began on Friday and ran until late on Sunday.

Telegram, the social media platform used by protesters from Iran to Belarus, blocked a "smart voting" channel aimed at defeating ruling party nominees, which carried recommendations for candidates in Russia's parliamentary elections.

The app gives detailed recommendations on who to vote for in an effort to challenge the party that backs President Vladimir Putin. It is one of the few levers Navalnys allies have left after a sweeping crackdown this year.

Telegram's founder Pavel Durov, who has carved out a libertarian image and resisted past censorship, said the platform would block election campaign services, including one used by Navalny's allies to give voter recommendations.

He said the decision had been taken because of a Russian ban on campaigning once polls are open, which he considered legitimate and similar to bans in many other countries.

Navalny's spokeswoman Kira Yarmysh condemned the move.

"It's a real disgrace when the censorship is imposed by private companies that allegedly defend the ideas of freedom," she wrote on Twitter.

Ivan Zhdanov, a political ally of Navalny, said he did not believe Telegram's justification and that the move looked to have been agreed somehow with Russia's authorities.

Late on Saturday, Navalny's camp said YouTube had also taken down one of their videos that contained the names of 225 candidates they endorsed.

"The video presentation of the smart voting recommendations for the constituencies with the nastiest (United Russia candidates) has also been removed," they wrote.

Navalny's camp said it was not a knockout blow as their voting recommendations were available elsewhere on social media.

But it is seen as a possible milestone in Russia's crackdown on the internet and its standoff with US tech firms.

Russia has for years sought sovereignty over its part of the internet, where anti-Kremlin politicians have followings and media critical of Putin operate.

Navalny's team uses Google's YouTube widely to air anti-corruption videos and to stream coverage and commentary of anti-Kremlin protests they have staged.

Russia's ruling United Russia party, which supports President Vladimir Putin, retained its parliamentary majority, although its performance was slightly weaker than at the previous parliamentary election in 2016. The vote followed the biggest crackdown on the Kremlin's domestic opponents in years.

The Navalny team's Telegram feed continued to function normally on Saturday and included links to voter recommendations available in Russia via Google Docs.

On a separate Telegram feed also used by the team, activists said Russia had told Google to remove the recommendations in Google Docs and that the US company had, in turn, asked Navalny's team to take them down.

Google did not immediately respond to a request for comment from the Reuters news agency.

In his statement, Durov said Google and Apple's restrictions of the Navalny app had set a dangerous precedent and meant Telegram, which is widely used in Russia, was more vulnerable to government pressure.

He said Telegram depends on Apple and Google to operate because of their dominant position in the mobile operating system market and his platform would not have been able to resist a Russian ban from 2018 to 2020 without them.

Russia tried to block Telegram in April 2018 but lifted the ban more than two years later, having apparently failed to enforce it.

"The app block by Apple and Google creates a dangerous precedent that will affect freedom of expression in Russia and the whole world," Durov said in a post on Telegram.

Visit link:
Navalny allies accuse Telegram and other platforms of censorship - Al Jazeera English

Posted in Libertarianism | Comments Off on Navalny allies accuse Telegram and other platforms of censorship – Al Jazeera English


What the Proposed Tax Deal Means for Your Business – Inc.

Posted: at 9:32 am

The House Democrats released their tax proposal with big headlines about increasing taxes on the rich and big corporations. But if you have read my articles before, you know I care less about big corporations and more about small businesses. Where do they fit into the tax code changes? Is tax reform good or bad for them? Some small businesses will benefit from the proposed changes. Here's who wins and loses with this new tax proposal:

The Winners: Small corporations earning less than $5 million

Currently, corporations are federally taxed at a flat rate of 21% of adjusted net income. The House proposal would create three new tax brackets for corporations: 18% on income below $400,000, 21% on income between $400,000 and $5 million, and 26.5% on income above $5 million.

Small corporations earning less than $5 million per year in income will actually experience a tax decrease under this proposal, saving up to $12k in tax expense (the 3-point rate cut applied to the first $400k of income). Not huge, but not bad.
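As a quick illustration, here is a minimal sketch of that bracket math (the rates and thresholds are the proposal's headline numbers described above, and the all-income top rate above $10 million follows the description later in this article; this is illustrative only, not tax advice):

```python
# Toy comparison of the current flat corporate rate vs. the proposed
# graduated schedule described above. Illustrative only, not tax advice.

def current_tax(income: float) -> float:
    """Current law: flat 21% on adjusted net income."""
    return 0.21 * income

def proposed_tax(income: float) -> float:
    """Proposed: 18% under $400k, 21% to $5M, 26.5% above $5M.
    Per this article, above $10M ALL income is taxed at the top rate."""
    if income > 10_000_000:
        return 0.265 * income
    brackets = [(400_000, 0.18), (5_000_000, 0.21), (float("inf"), 0.265)]
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income > lower:
            tax += rate * (min(income, upper) - lower)
        lower = upper
    return tax

for income in (400_000, 5_000_000, 12_000_000):
    saved = current_tax(income) - proposed_tax(income)
    print(f"${income:>12,}: change of ${saved:>12,.2f}")
# A $400k corporation saves 3% of $400k = $12,000, the figure cited above.
# At $12M the "savings" turn negative: the proposal raises that bill instead.
```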

Small businesses competing against foreign companies

Substantial portions of the tax reform package seek to close loopholes or deductions taken by foreign entities or domestic entities paying foreign taxes. Small businesses that cannot afford the scale or reach of international operations will benefit from a leveling of the playing field as they compete with foreign entities facing larger tax burdens.

Anyone waiting for the IRS

The proposal includes $79 billion of additional IRS funding for enforcement of new provisions. That is more than a 6x increase over the 2021 budget!

While entrepreneurs and libertarians generally cringe at the idea of more IRS bureaucrats issuing audits and investigating tax filings, there is a downside to our currently low IRS staffing: slow responses, terrible customer service, and delayed tax refunds.

Accounting departments nationwide have been struggling to file basic and time-sensitive forms, like changes in entity tax elections, often going 10+ months without confirmation or response from the IRS. It is all but impossible to contact IRS customer service now, with the 800 number automatically hanging up on callers after three hours on hold. The worst part is that most small businesses are still awaiting their 2020 income tax refunds five months after filing.

The hope is that extra IRS funding means more staff to process refunds, filings, and business negotiations faster.

The Losers of Tax Reform

In general, the more profit you earn, the more you stand to lose from tax reform. Here's a list of the losers in the current tax proposal in the House:

Corporations earning more than $5 million per year

High-income corporations are facing a new top bracket of 26.5%. In fact, if you earn more than $10 million per year, you will have ALL your income taxed at 26.5%, rather than just the incremental income above the threshold.

Pass-through entities earning more than $400k per year

High-income S-corps, partnerships, and sole proprietors are facing three headwinds in the tax reforms. First, the top tax bracket for personal income taxes (which affects pass-through entities like partnerships, S-corps, and sole proprietorships) will be increased from 37% to 39.6%.

Second, the threshold for this tax bracket will be lowered, meaning a new set of earners will suddenly qualify for the top tax bracket. The new bracket limit will be $400k (down from $523k) for individuals and $450k (down from $628k) for married filing jointly.
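Here is a rough, hedged sketch of what the combined rate increase and threshold drop could mean for a single filer (assuming 2021's 35% band between $400k and the $523,600 top-bracket threshold under current law, and ignoring deductions, credits, and all lower brackets, which are unchanged in this comparison):

```python
# Back-of-the-envelope extra federal tax for a single filer under the
# proposed top bracket: 39.6% above $400k, vs. today's 37% above $523,600
# with a 35% band in between. Illustrative only, not tax advice.

def extra_tax_single(taxable_income: float) -> float:
    if taxable_income <= 400_000:
        return 0.0
    old_top_start = 523_600  # 2021 top-bracket threshold, single filer
    proposed = 0.396 * (taxable_income - 400_000)
    current = (0.35 * (min(taxable_income, old_top_start) - 400_000)
               + 0.37 * max(taxable_income - old_top_start, 0.0))
    return proposed - current

for income in (450_000, 600_000, 1_000_000):
    print(f"${income:,}: about ${extra_tax_single(income):,.0f} more per year")
# Roughly $2,300 more at $450k, $7,700 at $600k, and $18,100 at $1M.
```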

Lastly, high-income pass-through entities will be disqualified from the Qualified Business Income Deduction. The QBID (commonly known as the pass-through tax deduction) is a deduction worth up to 20% of your income. However, the rules for the QBID are complex and include phase-outs based on income level, type of business activity, and even the tax year. It is difficult to know how much your business benefited from the QBID without reviewing your tax return.

The proposed tax reform would eliminate the QBID for anyone earning more than $500k/yr. jointly or $400k/yr. as a single individual. Consult with your fractional CFO or CPA to determine whether this would affect you.
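For a sense of scale, a minimal sketch of what losing the QBID is worth (assuming, optimistically, that the full 20% deduction applied and that the saved tax is valued at the proposed 39.6% top rate; the phase-out rules often reduce the deduction well below this):

```python
# Rough value of losing the 20% QBID for a pass-through owner who would
# sit in the proposed 39.6% top bracket. Optimistically assumes the full
# 20% deduction applied before; phase-outs often shrink it considerably.

def qbid_tax_savings(business_income: float, marginal_rate: float = 0.396) -> float:
    deduction = 0.20 * business_income
    return deduction * marginal_rate  # tax currently saved by the deduction

print(f"${qbid_tax_savings(500_000):,.0f}")  # ~$39,600/year at $500k of income
```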

Owners that sell businesses or business assets

The highest long-term capital gains tax rate (which applies to most businesses) would rise from 20% to 25%. This has a large impact on businesses that buy and sell appreciating assets like real estate, collectibles, stocks, or even the business itself.

In fact, the most popular way to avoid paying capital gains taxes, the Section 1202 exclusion, is also weakened in the proposed tax reforms. The gains exclusion would drop from 100% to 50%, creating up to $5 million in additional taxable capital gains per transaction.

Individuals with lots of money in retirement accounts

The new tax legislation seeks to limit the use of qualified retirement accounts, like traditional IRAs and Roth IRAs, based on the total amount of money someone has in such accounts.

Overall Impact of Tax Changes on Small Businesses

The Biden administration and House Democrats have successfully targeted high-income corporations and business owners in these reforms. Although there are some benefits in the legislation, in aggregate, the House's proposed tax changes would be a burden on high-income small businesses.

These terms are being actively negotiated in Congress, so do not get too excited. There is still a chance that none of it will happen. Here's my recommendation to small business owners facing the prospect of high tax liabilities:

In reality, most small businesses will not change anything in light of the new tax structure. There are dozens of business elements more impactful on cash flow than your tax strategy: sales and marketing strategy, operations strategy, pricing strategy, exit strategy... As much as we wish we had a fleet of corporate accountants to find every tax loophole, that is not economically realistic for small businesses. Do your due diligence, collaborate with your financial team, but always stay focused on fundamentals to ensure success.

The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

View original post here:
What the Proposed Tax Deal Means for Your Business - Inc.

Posted in Libertarianism | Comments Off on What the Proposed Tax Deal Means for Your Business – Inc.

NY redistricting commission's obscenely partisan maps defy will of voters (Guest Opinion by Mark Braiman) – syracuse.com

Posted: at 9:32 am

Mark Braiman, of Cazenovia, is treasurer of Madison County Libertarians.

Here is a sure-fire recipe for short-circuiting the open redistricting process New York voters demanded with a 2014 constitutional amendment. Start with a shift of New York's primaries from September to June; add a pandemic that delayed 2020 Census results by several months; and toss into the mix a partisan deadlock on the Independent Redistricting Commission. With the process now under extreme time pressure, the unforeseen consequence will likely be the state's three top politicians sitting in a room somewhere to doodle out the final maps a few days before legislative approval, or else a federal court doing it all on its own, without any input from the state's politicians or voters.

New York Democrats feel an urgent need to engage in pre-emptive gerrymandering to counter what will happen in red states. This is unappealing behavior but seems inevitable. I am nevertheless greatly irritated that this national gerrymandering war impacts me directly, in the form of the Democratic IRC members' proposed sea-serpent-shaped Central New York district extending from Tompkins County to Utica, with a neck through northern Madison County. This would be the first time in its 215-year history that Madison County has been divided between congressional districts. My home is so close to the obnoxiously arbitrary boundary that it will take a lot of scrutiny before I can discern which side I live on.

Forcing incumbent Republican Reps. John Katko and Claudia Tenney into the same district could be accomplished without dividing my county, or any county at all. The combined 2020 population of Onondaga, Madison and Oneida counties is 776,657. This is almost exactly the ideal district size of 776,971 (1/26 of the state population). Drawing a new congressional district from just these three counties would satisfy the Democrats' urge to force Tenney (Oneida County) and Katko (Onondaga County) to compete against each other, without dismembering Madison or other counties. This can furthermore be done without forcing any other anomalies in the surrounding districts, as can be mathematically proven. (See the map of Upstate congressional districts that I have just proposed to the IRC at MarkBraiman.com.) This map keeps every NY county undivided between congressional districts, excepting of course the nine with populations over 776,971.
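The district-size arithmetic is easy to check; here is a minimal sketch (New York's statewide 2020 census count of 20,201,249 is assumed here in order to back out the 1/26 figure):

```python
# Quick check of the district-size arithmetic cited above.
# Assumes New York's 2020 census population of 20,201,249.

NY_POP = 20_201_249
SEATS = 26  # House seats New York holds after the 2020 reapportionment

ideal = NY_POP / SEATS
combined = 776_657  # Onondaga + Madison + Oneida, 2020 census

print(f"Ideal district size: {ideal:,.0f}")  # ~776,971
print(f"Three-county total:  {combined:,}")
print(f"Gap: {combined - ideal:+,.0f} ({(combined - ideal) / ideal:+.2%})")
# About 314 people short of the ideal size, roughly 0.04% off.
```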

The Democrats on the IRC have also proposed obscenely gerrymandered New York Senate districts for Madison and Onondaga Counties. In their map, Madison is one of the few lucky small Upstate counties that escapes being divided into multiple Senate districts. However, it is once again thrown in with a motley collection of barely contiguous Onondaga County towns, henceforth to bear the appearance of a grotesque bobcat, curled almost all the way around the city of Syracuse in an act of animalistic self-grooming.

Speaking of animalistic behavior, the Republican members of the IRC have responded with a map that is just as obnoxiously partisan, despite featuring much simpler-shaped state Senate districts for Madison and Onondaga Counties. Their map takes Madison County entirely out of Sen. Rachel May's Syracuse district (entirely reasonable) but puts her and fellow incumbent Democratic Sen. John Mannion into a single elongated district (not so reasonable). In the process, the Republicans propose to split Onondaga County into four distinct Senate districts. None of these are contained entirely within Onondaga County, despite its population being 1.5 times the ideal state Senate district size of 320,655. Could the two nonpartisan members of the IRC have the integrity to stand up and say, "A plague on both your houses!"?

In sum, both Democratic and Republican wings of the IRC have put raw partisan self-interest over the reasonable and constitutionally mandated goal of keeping small counties intact wherever possible. The New York Constitution, Article III, Section 4, paragraph (c)(6), states clearly: "The requirements that senate districts not divide counties or towns ... shall remain in effect." These requirements have been part of our state Constitution for nearly 250 years, but over the past half-century they have increasingly been breached for partisan purposes.

Dividing smaller Upstate counties among multiple congressional and legislative districts puts unnecessary burdens on voters, who must figure out which races they are voting in. It thereby alienates us further from the electoral process. It also burdens these small counties' Boards of Elections by unnecessarily increasing the number of races they have to count.

More important, the ongoing violation of constitutional districting provisions since the 1970s has weakened the voices of local leaders in state government. It has likely contributed to the growth of state mandates on counties and other local governments, for example the requirement for counties to fund Medicaid using property taxes.

Whatever the need may be to divide large Downstate New York counties and cities among multiple districts in order to keep these districts nearly equal in size, this need is not present for smaller Upstate jurisdictions, as my math shows. My proposed map keeps every Upstate city, village, and town undivided, as well as all 49 of the counties with a 2020 population under 320,655. It also follows another key precept of fairness to all counties, by guaranteeing each of the other six larger counties north of New York City (including Onondaga) at least one core Senate district entirely within the county. It even manages to do all this without forcing Sens. May and Mannion, who live barely five miles apart, into the same Senate district.


Read the original post:
NY redistricting commission's obscenely partisan maps defy will of voters (Guest Opinion by Mark Braiman) - syracuse.com

Posted in Libertarianism | Comments Off on NY redistricting commission's obscenely partisan maps defy will of voters (Guest Opinion by Mark Braiman) – syracuse.com

WWE Mayor Kane Defies Authority, Will Not Comply with Vaccine Mandate – Bleeding Cool News

Posted: at 9:32 am


Former WWE Superstar turned Mayor of Knox County, Tennessee, Kane may have once been a stooge for The Authority of Triple H and Stephanie McMahon, but when it comes to a Democratic president, it's another story. Mayor Kane unleashed hellfire and brimstone on President Joe Biden, rival of Mayor Kane's fellow WWE Hall of Famer, former president Donald Trump, over Biden's COVID-19 vaccine mandates. According to The Big Red Machine, Knox County, Tennessee, will not comply with the federal rules.

In a letter to the president shared on Twitter, Mayor Kane accuses Biden of violating the Constitution with the order. "Mr. President, if we as elected officials ignore, disregard, and contravene the laws which bind us, how can we expect our fellow citizens to respect and follow the laws which bind all of us as a society?" asked The Devil's Favorite Demon, while vowing to ignore, disregard, and contravene Biden's executive order.

Mayor Kane also went on to take President Biden to task for the war in Afghanistan, which makes sense, since the only time Kane thinks Americans should travel to the Middle East is when they're teaming with The Undertaker to battle Triple H and Shawn Michaels in front of the Saudi Royal Family.

Under the leadership of Mayor Kane, the only Libertarian political figure to receive the endorsements of both Senator Rand Paul and Bryan Danielson, Knox County is currently experiencing a coronavirus infection spike higher than at any other time during the pandemic, which is no surprise, considering Mayor Kane opposes pretty much every effort to stem the disease's spread. Kane has previously complained about bans on large gatherings after they prevented him from speaking at an event known as the Juggalo Gathering for Libertarians. Kane was later forced to apologize to Knox County's own Board of Health after cutting a shoot promo on them over coronavirus safety protocols. Later, it was reported that 975 COVID-19 vaccines went missing under Mayor Kane's regime, though it was eventually found that the vaccines had been accidentally thrown in the trash and not, as originally reported, stolen.

View original post here:
WWE Mayor Kane Defies Authority, Will Not Comply with Vaccine Mandate - Bleeding Cool News

Posted in Libertarianism | Comments Off on WWE Mayor Kane Defies Authority, Will Not Comply with Vaccine Mandate – Bleeding Cool News

Proud Boy Afghan refugee hunting permit stickers found on University of Michigan campus – MLive.com

Posted: at 9:31 am

ANN ARBOR, MI - Several Proud Boy stickers marked as "Afghan refugee hunting permits" were recently discovered on the University of Michigan campus by a student.

A student spotted the insensitive stickers and reported them to police on Sunday, Sept. 12, according to Rick Fitzgerald, University of Michigan spokesman.

The stickers, each of which had "Proud Boy" and "Afghan Refugee Hunting Permit" written on them, were found on various properties near the university's West Hall. They were removed by the student who found them, Fitzgerald said.

The stickers listed a "permit number" of 09*11*01, with no bag limit and no expiration, to hunt and kill Afghan refugees nationwide.

It is unknown where the stickers came from or who placed them. The matter remains under investigation, according to officials.

"Bigotry has no place on this campus," Fitzgerald said.

The stickers were discovered a day after the 20th anniversary of the Sept. 11, 2001, terrorist attacks, which led to the U.S. invasion of Afghanistan. After 20 years in Afghanistan, the U.S. withdrew from the country in August in what was described as a chaotic evacuation, with thousands of refugees left in limbo, struggling to get out of the country.

Several private organizations in Michigan are taking in refugees, including Jewish Family Services in Ann Arbor. Grand Rapids is expected to take in about 500 refugees by the end of the month.

As Michigan prepares to receive Afghan refugees, Grand Rapids vigil honors their struggle

The Proud Boys are described by the Southern Poverty Law Center as a far-right organization that uses intimidation to instigate conflict while regularly spouting white nationalist memes and maintaining affiliations with known extremists.

The organization marched in Kalamazoo in September 2020, an event that ended in violence between the group and counter-protesters.

Why the Proud Boys visited Kalamazoo

Anyone with information about the incident is asked to contact University of Michigan Division of Public Safety and Security at 734-763-1131.

More from MLive:

Man tells police he was robbed by friend at gunpoint after showing him $1,700 engagement ring

Michigan State University, Henry Ford Health join forces to bolster cancer, health care research

Ypsilanti advances proposal to allow accessory apartments on 3K more properties

Originally posted here:

Proud Boy Afghan refugee hunting permit stickers found on University of Michigan campus - MLive.com

Posted in Proud Boys | Comments Off on Proud Boy Afghan refugee hunting permit stickers found on University of Michigan campus – MLive.com

Improved algorithms may be more important for AI performance than faster hardware – VentureBeat

Posted: at 9:30 am


When it comes to AI, algorithmic innovations are substantially more important than hardware, at least where the problems involve billions to trillions of data points. That's the conclusion of a team of scientists at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), who conducted what they claim is the first study on how fast algorithms are improving across a broad range of examples.

Algorithms tell software how to make sense of text, visual, and audio data so that it can, in turn, draw inferences from the data. For example, OpenAI's GPT-3 was trained on webpages, ebooks, and other documents to learn how to write papers in a humanlike way. The more efficient the algorithm, the less work the software has to do. And as algorithms are enhanced, less computing power should be needed, in theory. But this isn't settled science. AI research and infrastructure startups like OpenAI and Cerebras are betting that models will have to increase in size substantially to reach higher levels of sophistication.

The CSAIL team, led by MIT research scientist Neil Thompson, who previously coauthored a paper showing that algorithms were approaching the limits of modern computing hardware, analyzed data from 57 computer science textbooks and more than 1,110 research papers to trace the history of where algorithms improved. In total, they looked at 113 algorithm families, or sets of algorithms that solved the same problem, that had been highlighted as most important by the textbooks.

The team reconstructed the history of the 113 families, tracking each time a new algorithm was proposed for a problem and making special note of those that were more efficient. From the 1940s to now, the team found an average of eight algorithms per family, of which a couple improved in efficiency.

For large computing problems, 43% of algorithm families had year-on-year improvements that were equal to or larger than the gains from Moore's law, the principle that the speed of computers roughly doubles every two years. In 14% of problems, the performance improvements vastly outpaced those that came from improved hardware, with the gains from better algorithms being particularly meaningful for big data problems.
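To see why algorithmic gains can dominate hardware gains, here is a minimal sketch comparing the two compounding effects (the 50% annual algorithmic rate is a hypothetical placeholder, not a figure from the MIT study):

```python
# Compare compound gains from hardware (Moore's law: roughly 2x every
# two years) with a hypothetical algorithm family improving 50% per year.
# The 50% rate is a placeholder for illustration, not a study figure.

years = 10
hardware_gain = 2 ** (years / 2)   # doubling every two years
algorithm_gain = 1.5 ** years      # compounding 50% annual improvement

print(f"Hardware over {years} years: {hardware_gain:,.0f}x")   # ~32x
print(f"Algorithms at 50%/yr:      {algorithm_gain:,.0f}x")    # ~58x
# For big-data problems the effect is stronger still: moving from an
# O(n^2) to an O(n log n) algorithm yields a speedup that grows with n,
# unlike a fixed hardware multiplier.
```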

The new MIT study adds to a growing body of evidence that the size of algorithms matters less than their architectural complexity. For example, earlier this month, a team of Google researchers published a study claiming that a model much smaller than GPT-3, fine-tuned language net (FLAN), bests GPT-3 by a large margin on a number of challenging benchmarks. And in a 2020 survey, OpenAI found that since 2012, the amount of compute needed to train an AI model to the same performance on classifying images in a popular benchmark, ImageNet, has been decreasing by a factor of two every 16 months.

There are findings to the contrary. In 2018, OpenAI researchers released a separate analysis showing that from 2012 to 2018, the amount of compute used in the largest AI training runs grew more than 300,000-fold with a 3.5-month doubling time, far exceeding the pace of Moore's law. But assuming algorithmic improvements receive greater attention in the years to come, they could solve some of the other problems associated with large language models, like environmental impact and cost.

In June 2019, researchers at the University of Massachusetts at Amherst released a report estimating that the amount of power required for training and searching a certain model involves the emission of roughly 626,000 pounds of carbon dioxide, equivalent to nearly 5 times the lifetime emissions of the average U.S. car. GPT-3 alone used 1,287 megawatt-hours of electricity during training and produced 552 metric tons of carbon dioxide emissions, a Google study found, the same amount emitted by 100 average homes' electricity usage over a year.

On the expenses side, a Synced report estimated that the University of Washington's Grover fake news detection model cost $25,000 to train; OpenAI reportedly racked up $12 million training GPT-3; and Google spent around $6,912 to train BERT. While AI training costs dropped 100-fold between 2017 and 2019, according to one source, these amounts far exceed the computing budgets of most startups and institutions, let alone independent researchers.

"Through our analysis, we were able to say how many more tasks could be done using the same amount of computing power after an algorithm improved," Thompson said in a press release. "In an era where the environmental footprint of computing is increasingly worrisome, this is a way to improve businesses and other organizations without the downside."

Read more from the original source:

Improved algorithms may be more important for AI performance than faster hardware - VentureBeat

Posted in Ai | Comments Off on Improved algorithms may be more important for AI performance than faster hardware – VentureBeat

Abductive inference: The blind spot of artificial intelligence – TechTalks

Posted: at 9:30 am

Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence.

Recent advances in deep learning have rekindled interest in the imminence of machines that can think and act like humans, or artificial general intelligence. By following the path of building bigger and better neural networks, the thinking goes, we will be able to get closer and closer to creating a digital version of the human brain.

But this is a myth, argues computer scientist Erik Larson, and all evidence suggests that human and machine intelligence are radically different. Larson's new book, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, discusses how widely publicized misconceptions about intelligence and inference have led AI research down narrow paths that are limiting innovation and scientific discoveries.

And unless scientists, researchers, and the organizations that support their work change course, Larson warns, they will be doomed to "resignation to the creep of a machine-land, where genuine invention is sidelined in favor of futuristic talk advocating current approaches, often from entrenched interests."

From a scientific standpoint, the myth of AI assumes that we will achieve artificial general intelligence (AGI) by making progress on narrow applications, such as classifying images, understanding voice commands, or playing games. But the technologies underlying these narrow AI systems do not address the broader challenges that must be solved for general intelligence capabilities, such as holding basic conversations, accomplishing simple chores in a house, or other tasks that require common sense.

"As we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress, but rather picking the low-hanging fruit," Larson writes.

The cultural consequence of the myth of AI is ignoring the scientific mystery of intelligence and endlessly talking about ongoing progress on deep learning and other contemporary technologies. This myth discourages scientists from thinking about new ways to tackle the challenge of intelligence.

"We are unlikely to get innovation if we choose to ignore a core mystery rather than face it up," Larson writes. "A healthy culture for innovation emphasizes exploring unknowns, not hyping extensions of existing methods... Mythology about inevitable success in AI tends to extinguish the very culture of invention necessary for real progress."

You step out of your home and notice that the street is wet. Your first thought is that it must have been raining. But it's sunny and the sidewalk is dry, so you immediately cross out the possibility of rain. As you look to the side, you see a road wash tanker parked down the street. You conclude that the road is wet because the tanker washed it.

This is an example of inference, the act of going from observations to conclusions, and it is the basic function of intelligent beings. We're constantly inferring things based on what we know and what we perceive. Most of it happens subconsciously, in the background of our mind, without focus and direct attention.

"Any system that infers must have some basic intelligence, because the very act of using what is known and what is observed to update beliefs is inescapably tied up with what we mean by intelligence," Larson writes.

AI researchers base their systems on two types of inference machines: deductive and inductive. Deductive inference uses prior knowledge to reason about the world. This is the basis of symbolic artificial intelligence, the main focus of researchers in the early decades of AI. Engineers create symbolic systems by endowing them with a predefined set of rules and facts, and the AI uses this knowledge to reason about the data it receives.

Inductive inference, which has gained more traction among AI researchers and tech companies in the past decade, is the acquisition of knowledge through experience. Machine learning algorithms are inductive inference engines. An ML model trained on relevant examples will find patterns that map inputs to outputs. In recent years, AI researchers have used machine learning, big data, and advanced processors to train models on tasks that were beyond the capacity of symbolic systems.

A third type of reasoning, abductive inference, was first introduced by American scientist Charles Sanders Peirce in the 19th century. Abductive inference is the cognitive ability to come up with intuitions and hypotheses, to make guesses that are better than random stabs at the truth.

For example, there can be numerous reasons for the street to be wet (including some that we haven't directly experienced before), but abductive inference enables us to select the most promising hypotheses, quickly eliminate the wrong ones, look for new ones, and reach a reliable conclusion. As Larson puts it in The Myth of Artificial Intelligence, "We guess, out of a background of effectively infinite possibilities, which hypotheses seem likely or plausible."

Abductive inference is what many refer to as common sense. It is the conceptual framework within which we view facts or data and the glue that brings the other types of inference together. It enables us to focus at any moment on what's relevant among the ton of information that exists in our mind and the ton of data we're receiving through our senses.
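To make the contrast concrete, here is a toy sketch of abduction as "inference to the best explanation" over the wet-street example above (a hand-rolled illustration of the idea, not anything from Larson's book or a real AI system):

```python
# Toy "inference to the best explanation" over the wet-street example.
# Each hypothesis carries a prior plausibility and some predictions;
# any prediction contradicted by observation eliminates the hypothesis.

observations = {"street_wet": True, "sunny": True,
                "sidewalk_dry": True, "tanker_nearby": True}

hypotheses = {
    "it_rained":     {"prior": 0.6, "predicts": {"street_wet": True,
                                                 "sunny": False,
                                                 "sidewalk_dry": False}},
    "tanker_washed": {"prior": 0.3, "predicts": {"street_wet": True,
                                                 "tanker_nearby": True}},
    "burst_pipe":    {"prior": 0.1, "predicts": {"street_wet": True}},
}

def score(h: dict) -> float:
    """Zero if any prediction contradicts an observation, else the prior."""
    for fact, predicted in h["predicts"].items():
        if fact in observations and observations[fact] != predicted:
            return 0.0
    return h["prior"]

best = max(hypotheses, key=lambda name: score(hypotheses[name]))
print(best)  # -> tanker_washed: rain is ruled out by the sun and dry sidewalk
```

Note what the sketch dodges: the candidate hypotheses are hand-enumerated, whereas the hard part Larson emphasizes is generating plausible guesses out of an effectively infinite space in the first place.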

The problem is that the AI community hasn't paid enough attention to abductive inference.

"Abduction entered the AI discussion with attempts at Abductive Logic Programming in the 1980s and 1990s, but those efforts were flawed and later abandoned. They were reformulations of logic programming, which is a variant of deduction," Larson told TechTalks.

Abduction got another chance in the 2010s with the rise of Bayesian networks, inference engines that try to compute causality. But like the earlier approaches, the newer approaches "shared the flaw of not capturing true abduction," Larson said, adding that Bayesian and other graphical models "are variants of induction." In The Myth of Artificial Intelligence, he refers to them as "abduction in name only."

For the most part, the history of AI has been dominated by deduction and induction.

"When the early AI pioneers like [Alan] Newell, [Herbert] Simon, [John] McCarthy, and [Marvin] Minsky took up the question of artificial inference (the core of AI), they assumed that writing deductive-style rules would suffice to generate intelligent thought and action," Larson said. "That was never the case, really, as should have been earlier acknowledged in discussions about how we do science."

For decades, researchers tried to expand the powers of symbolic AI systems by providing them with manually written rules and facts. The premise was that if you endow an AI system with all the knowledge that humans have, it will be able to act as smartly as humans. But pure symbolic AI has failed for various reasons. Symbolic systems can't acquire and add new knowledge on their own, which makes them rigid. Creating symbolic AI becomes an endless chase of adding new facts and rules, only to find the system making new mistakes that it can't fix. And much of our knowledge is implicit and cannot be expressed in rules and facts and fed to symbolic systems.

"It's curious here that no one really explicitly stopped and said, 'Wait. This is not going to work!'" Larson said. "That would have shifted research directly towards abduction or hypothesis generation or, say, context-sensitive inference."

In the past two decades, with the growing availability of data and compute resources, machine learning algorithms, especially deep neural networks, have become the focus of attention in the AI community. Deep learning technology has unlocked many applications that were previously beyond the limits of computers. And it has attracted interest and money from some of the wealthiest companies in the world.

"I think with the advent of the World Wide Web, the empirical or inductive (data-centric) approaches took over, and abduction, as with deduction, was largely forgotten," Larson said.

But machine learning systems also suffer from severe limits, including the lack of causality, poor handling of edge cases, and the need for too much data. And these limits are becoming more evident and problematic as researchers try to apply ML to sensitive fields such as healthcare and finance.

Some scientists, including reinforcement learning pioneer Richard Sutton, believe that we should stick to methods that can scale with the availability of data and computation, namely learning and search. For example, as neural networks grow bigger and are trained on more data, they will eventually overcome their limits and lead to new breakthroughs.

Larson dismisses the scaling up of data-driven AI as fundamentally flawed as a model for intelligence. While both search and learning can provide useful applications, they are based on non-abductive inference, he reiterates.

"Search won't scale into commonsense or abductive inference without a revolution in thinking about inference, which hasn't happened yet. Similarly with machine learning, the data-driven nature of learning approaches means essentially that the inferences have to be in the data, so to speak, and that's demonstrably not true of many intelligent inferences that people routinely perform," Larson said. "We don't just look to the past, captured, say, in a large dataset, to figure out what to conclude or think or infer about the future."

Other scientists believe that hybrid AI that brings together symbolic systems and neural networks will have a bigger promise of dealing with the shortcomings of deep learning. One example is IBM Watson, which became famous when it beat world champions at Jeopardy! More recent proof-of-concept hybrid models have shown promising results in applications where symbolic AI and deep learning alone perform poorly.

Larson believes that hybrid systems can fill in the gaps in machine learning-only or rules-based-only approaches. As a researcher in the field of natural language processing, he is currently working on combining large pre-trained language models like GPT-3 with older work on the semantic web, in the form of knowledge graphs, to create better applications in search, question answering, and other tasks.

"But deduction-induction combos don't get us to abduction, because the three types of inference are formally distinct, so they don't reduce to each other and can't be combined to get a third," he said.

In The Myth of Artificial Intelligence, Larson describes attempts to circumvent abduction as the "inference trap."

"Purely inductively inspired techniques like machine learning remain inadequate, no matter how fast computers get, and hybrid systems like Watson fall short of general understanding as well," he writes. "In open-ended scenarios requiring knowledge about the world, like language understanding, abduction is central and irreplaceable. Because of this, attempts at combining deductive and inductive strategies are always doomed to fail... The field needs a fundamental theory of abduction. In the meantime, we are stuck in traps."

The AI community's narrow focus on data-driven approaches has centralized research and innovation in a few organizations that have vast stores of data and deep pockets. With deep learning becoming a useful way to turn data into profitable products, big tech companies are now locked in a tight race to hire AI talent, driving researchers away from academia by offering them lucrative salaries.

This shift has made it very difficult for non-profit labs and small companies to become involved in AI research.

"When you tie research and development in AI to the ownership and control of very large datasets, you get a barrier to entry for start-ups, who don't own the data," Larson said, adding that data-driven AI intrinsically creates "winner-take-all" scenarios in the commercial sector.

The monopolization of AI is in turn hampering scientific research. With big tech companies focusing on creating applications in which they can leverage their vast data resources to maintain the edge over their competitors, there's little incentive to explore alternative approaches to AI. Work in the field starts to skew toward narrow and profitable applications at the expense of efforts that can lead to new inventions.

"No one at present knows how AI would look in the absence of such gargantuan centralized datasets, so there's nothing really on offer for entrepreneurs looking to compete by designing different and more powerful AI," Larson said.

In his book, Larson warns about the current culture of AI, which is "squeezing profits out of low-hanging fruit, while continuing to spin AI mythology." The illusion of progress on artificial general intelligence can lead to another AI winter, he writes.

But while an AI winter might dampen interest in deep learning and data-driven AI, it can open the way for a new generation of thinkers to explore new pathways. Larson hopes scientists start looking beyond existing methods.

In The Myth of Artificial Intelligence, Larson provides an inference framework that sheds light on the challenges that the field faces today and helps readers to see through the overblown claims about progress toward AGI or singularity.

"My hope is that non-specialists have some tools to combat this kind of inevitability thinking, which isn't scientific, and that my colleagues and other AI scientists can view it as a wake-up call to get to work on the very real problems the field faces," Larson said.

View original post here:

Abductive inference: The blind spot of artificial intelligence - TechTalks

Posted in Ai | Comments Off on Abductive inference: The blind spot of artificial intelligence – TechTalks

This AI could predict 10 years of scientific priorities – if we let it – MIT Technology Review

Posted: at 9:30 am

The survey committee, which receives input from a host of smaller panels, takes into account a gargantuan amount of information to create research strategies. Although the Academies won't release the committee's final recommendation to NASA for a few more weeks, scientists are itching to know which of their questions will make it in, and which will be left out.

"The Decadal Survey really helps NASA decide how they're going to lead the future of human discovery in space, so it's really important that they're well informed," says Brant Robertson, a professor of astronomy and astrophysics at UC Santa Cruz.

One team of researchers wants to use artificial intelligence to make this process easier. Their proposal isn't for a specific mission or line of questioning; rather, they say, their AI can help scientists make tough decisions about which other proposals to prioritize.

The idea is that by training an AI to spot research areas that are either growing or declining rapidly, the tool could make it easier for survey committees and panels to decide what should make the list.

"What we wanted was to have a system that would do a lot of the work that the Decadal Survey does, and let the scientists working on the Decadal Survey do what they will do best," says Harley Thronson, a retired senior scientist at NASA's Goddard Space Flight Center and lead author of the proposal.

Although members of each committee are chosen for their expertise in their respective fields, it's impossible for every member to grasp the nuance of every scientific theme. The number of astrophysics publications increases by 5% every year, according to the authors. That's a lot for anyone to process.

That's where Thronson's AI comes in.

It took just over a year to build, but eventually, Thronson's team was able to train it on more than 400,000 pieces of research published in the decade leading up to the Astro2010 survey. They were also able to teach the AI to sift through thousands of abstracts to identify both low- and high-impact areas from two- and three-word topic phrases like "planetary system" or "extrasolar planet."
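The general technique is easy to sketch: count topic-phrase frequencies per year and flag the fastest growers. The toy below is a generic illustration of that idea (the corpus, phrases, and method are hypothetical stand-ins, not the team's actual model):

```python
# Generic trend detection over paper abstracts: count how often a topic
# phrase appears each year and watch for rapid growth. A toy stand-in
# for the kind of analysis described, using a tiny hypothetical corpus.

from collections import Counter, defaultdict

def bigrams(text: str):
    words = text.lower().split()
    return zip(words, words[1:])

papers = [  # (year, abstract) pairs; made-up examples
    (2001, "survey of planetary system formation"),
    (2008, "extrasolar planet detected around a nearby star"),
    (2009, "another extrasolar planet candidate confirmed"),
    (2009, "extrasolar planet atmospheres observed directly"),
]

counts = defaultdict(Counter)  # year -> bigram frequency
for year, abstract in papers:
    counts[year].update(bigrams(abstract))

phrase = ("extrasolar", "planet")
trend = {year: c[phrase] for year, c in sorted(counts.items())}
print(trend)  # {2001: 0, 2008: 1, 2009: 2}: a (toy) rising research area
```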

According to the researchers' white paper, the AI successfully "backcasted" six popular research themes of the last 10 years, including a meteoric rise in exoplanet research and observation of galaxies.

"One of the challenging aspects of artificial intelligence is that they sometimes will predict, or come up with, or analyze things that are completely surprising to the humans," says Thronson. "And we saw this a lot."

Thronson and his collaborators think the steering committee should use their AI to help review and summarize the vast amounts of text the panel must sift through, leaving human experts to make the final call.

Their research isn't the first to try to use AI to analyze and shape scientific literature. Other AIs have already been used to help scientists peer-review their colleagues' work.

But could it be trusted with a task as important and influential as the Decadal Survey?

Read the rest here:

This AI could predict 10 years of scientific priorities – if we let it - MIT Technology Review

Posted in Ai | Comments Off on This AI could predict 10 years of scientific priorities – if we let it – MIT Technology Review

AI Disruption: What VCs Are Betting On – Forbes

Posted: at 9:30 am


According to data from PitchBook, funding for AI deals has continued at a furious pace. In the latest quarter, the amount invested came to a record $31.6 billion. Note that 11 deals closed at more than $500 million each.

Granted, plenty of these startups will fade away or even go bust. But of course, some will ultimately disrupt industries and change the landscape of the global economy.

"To be disrupted, you have to believe the AI is going to make 10x better recommendations than what's available today," said Eric Vishria, who is a General Partner at Benchmark. "I think that is likely to happen in really complex, high-dimensional spaces, where there are so many intermingled factors at play that finding correlations via standard analytical techniques is really difficult."

So then what are some of the industries that are vulnerable to AI disruption? Well, let's see where some of the top VCs are investing today:

Software Development: There have been advances in DevOps and IDEs. Yet software development remains labor-intensive. And it does not help that it's extremely difficult to recruit qualified developers.

But AI can make a big difference. "Advancements in state-of-the-art natural language processing algorithms could revolutionize software development, initially by significantly reducing the boilerplate code that software developers write today and in the long run by writing entire applications with little assistance from humans," said Nnamdi Iregbulem, who is a Partner at Lightspeed Venture Partners.

Consider GPT-3, a large neural network trained to generate content. "Products like GitHub Copilot, which are also based on GPT-3, will also disrupt software development," said Jai Das, who is the President and Partner at Sapphire Ventures.

Cybersecurity: This is one of the biggest software markets. But the technologies really need retooling. After all, there continue to be more and more breaches and hacks.

"Cybersecurity is likely to turn into an AI-vs-AI game very soon," said Deepak Jeevankumar, who is a Managing Director at Dell Technologies Capital. "Sophisticated attackers are already using AI and bots to get over defenses."

Construction: This is a massive industry and will continue to grow as the global population increases. Yet construction has seen relatively small amounts of IT investment. But AI could be a game changer.

"An incremental 1% increase in efficiency can mean millions of dollars in cost savings," said Shawn Carolan, who is a Managing Partner at Menlo Ventures. "There are many companies, like Openspace.ai, doing transformative work using AI in the construction space. Openspace leverages AI and machine vision to essentially become a photographic memory for job sites. It automatically uploads and stitches together images of a job site so that customers can do a virtual walk-through and monitor the project at any time."

Talent Management: HR has generally lagged in innovation. The fact is that many of its processes are manual and inefficient.

But AI can certainly be a solution. In fact, AI startups like Eightfold.ai have been able to post substantial growth in the HR category. In June, the company announced funding of $220 million, which was led by SoftBank Vision Fund 2.

"Every single company is talking about talent as a key priority, and the companies that embrace AI to find better candidates faster, cheaper, at scale, they have a true competitive advantage," said Kirthiga Reddy, who is a Partner at SoftBank. "Understanding how to use AI to amplify the interactions in the talent lifecycle is a differentiator and advantage for these businesses."

Drug Discovery: The development of the Covid-19 vaccines, from companies like Pfizer, Moderna and BioNTech, has highlighted the power of innovation in the healthcare industry. But despite this, there is still much to be done. The fact is that drug development is costly and time-consuming.

"It's becoming impossible to process these large datasets without using the latest AI/ML technologies," said Dusan Perovic, who is a partner at Two Sigma Ventures. "Companies that are early adopters of these data science tools and thereby are able to analyze larger datasets are going to make faster progress than companies that rely on older data analytics tools."

Tom (@ttaulli) is an advisor/board member to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction, The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems and Implementing AI Systems: Transform Your Business in 6 Steps. He also has developed various online courses, such as one on the COBOL programming language.

See the rest here:

AI Disruption: What VCs Are Betting On - Forbes

Posted in Ai | Comments Off on AI Disruption: What VCs Are Betting On – Forbes