More Than Two Thirds Of States Are Pushing Highly Controversial (And Likely Unconstitutional) Bills To Moderate Speech Online – Techdirt

from the the-moral-panic-to-end-all-moral-panics dept

Over the last year and a half, we've had plenty of stories about how various state legislators are shoving each other aside to pass laws to try to regulate speech online. Of course, that's generally not how they put it. They claim that they're regulating social media, and making lots of (highly questionable) assumptions insisting that social media is somehow bad. And this is coming from both sides of the traditional political spectrum. Republicans are pushing bills to compel websites to host speech, while Democrats are pushing bills to compel websites to censor speech. And sometimes they team up to push horrible, dangerous, unconstitutional legislation "for the children."

Over at Politico, Rebecca Kern has done an amazing job cataloging this rush by state legislators across the country to push these laws, almost all of which are likely unconstitutional. It's depressing as anything, and in a few decades, when we look back and talk about the incredibly ridiculous moral panic over social media, maps like these will be front and center:

You should read Kern's full article, as it breaks the various bills down into four categories: banning censorship, reporting hateful content, regulating algorithms, and mandating transparency, including interesting discussions on each category.

Of course, as you'll note in the chart above, while Texas, Florida, and New York are the only states so far to pass such laws, the Florida and Texas ones are both on hold due to courts recognizing their problems. While New York's only passed bill (it has more in the hopper) perhaps isn't quite as bad as Florida's and Texas's, it's still awful, and hopefully someone will challenge its constitutionality as well.

However, part of the problem is that, for the apparently dwindling collection of people who still believe in free speech online, all of these bills (and many of the states listed above aren't doing just one bill, but multiple crazy bills all at once) are creating a sort of distributed denial-of-service attack on free speech advocates.

We simply can't respond to every crazy new bill in every crazy state legislature trying to regulate speech online. We (and here I mean literally us at the Copia Institute) are trying to help educate and explain to policymakers all across the country how dangerous and backwards most of these bills are. But we're a tiny, tiny team with very limited resources.

Yet, at the same time, many in the media (without noting that they compete with social media for ad dollars) seem to be cheering on many of these bills.

And, speaking of free speech advocates, it is beyond disappointing in Kern's article to see the Knight First Amendment Institute, which I've worked with many times, and which I respect, quoted as supporting some of these clearly unconstitutional bills. There seems to have been an unfortunate shift in the Institute's support for free speech over the last year or so. Rather than protecting the 1st Amendment, it has repeatedly staked out weird positions that seem designed to chip away at the 1st Amendment protections that are so important.

For example, they apparently see the ability to regulate algorithms as possibly not violating the 1st Amendment, which is crazy:

However, Wilkens, of the Knight First Amendment Institute, said that while the bill may implicate the First Amendment, it doesn't mean that it violates the First Amendment. He said that while it's still up for interpretation, the legislation, if it became law, may be held constitutional because the state's interest here in protecting young girls seems to be a very strong interest.

I'm not going to go deep on why this is disconnected from reality, both the idea that the bill being discussed (California's AB 2048) would protect young girls (it wouldn't) and the idea that it might be constitutional (it obviously is not). But it's distressing beyond belief that yet another institution that has taken in many millions of dollars (way more than Copia has received in nearly 25 years of existence) is now fighting against the 1st Amendment rather than protecting it.

There's a war going on against online speech these days, and much of it is happening in state houses, where it is very, very difficult for the remaining advocates of online speech to be heard. And it's not helping that others who claim to be supporters of free speech are out there actively undermining it.

Filed Under: 1st amendment, california, florida, free speech, online speech, regulating social media, state legislatures, states, texas


A Little Representation Goes a Long Way – The Dispatch

Dear Reader (excluding those of you unhealthily bothered by other people's admittedly weird handiwork),

Stop me if you've heard this one before.

No, wait: You are powerless to stop me if you've heard this one before, hah! In 1970, Richard Nixon nominated G. Harrold Carswell to fill Abe Fortas's seat on the Supreme Court. Critics charged that Carswell was a decidedly mediocre jurist. Sen. Roman Hruska's defense of Carswell and the nomination is considered a minor classic in political spin. In a TV interview, he said, "Even if he were mediocre, there are a lot of mediocre judges and people and lawyers. They are entitled to a little representation, aren't they? We can't have all Brandeises and Frankfurters and Cardozos."

I like this anecdote for a bunch of reasons. Hruska was a good man, and he had a perfectly respectable, at times even laudatory, political career. This episode is the only thing he's remembered for by those other than his friends and family and some Nebraska political junkies. It got ample space in his obituaries, and it's a good cautionary tale about how small slips of the tongue can end up defining you.

Scattergories.

But what I really like about this story is how it mangles a way of thinking about representation. There's a category error buried in it.

I don't like the Stanford Encyclopedia of Philosophy's entry on category errors because it reduces them to infelicitous statements. On the other hand, I do like its examples of the infelicity of category errors: "The number two is blue," "The theory of relativity is eating breakfast," or "Green ideas sleep furiously."

I love statements like that because they expose how language can become visible to our brains when it makes connections between things we don't expect to be connected. For instance, there are a bunch of versions of the following joke:

Q: What is the difference between an orange?

A: A pencil. Because a vest has no sleeves.

If you laugh at this, it's because your brain can't make sense of it, so you enjoy the absurdity. And I think part of that enjoyment stems from the recognition of how language drives how we think about stuff. We like to think language is bound up with rationality. The words we use align with reality, and reality is governed by reason in some fundamental sense: 2 + 2 = 4 because when I take two rocks and add two more rocks, I get four rocks. But language doesn't have to be bound by reason. I can say "two plus two equals a duck," but, so far, reality can't make that happen. In other words, language can put distance between the world and our brains.

A more reliable form of humor points out connections between things we either don't see or thought we were the only ones to notice. A whole branch of comedy boils down to "Did you ever notice …?" These jokes work because they confirm pre-rational intuitions or make irrational connections between things like cause and effect. Don't believe me? Pull my finger and I'll prove it to you.

Anyway, the reason I don't like reducing category errors to merely absurd statements is that I think category errors are the bane of politics. Everyone recognizes that "the theory of relativity is breakfast" is nonsense. But when Chris Rock said Barack Obama was the "dad" of the country, lots of very smart people nodded. Of course, lots of conservatives rolled their eyes, but not out of rejection of a category error. Partisan animosity did most of the work getting those eyes to roll. Likewise, when supporters of Trump (or Reagan or Eisenhower or whomever) made similar statements, partisan opponents rolled their eyes. The idea that the president is the father of the American family is a bit of political boilerplate going back to George Washington. But at least Washington's claim to that metaphorical title depended on the act of creating the country in the first place.

But the idea that the president is akin to a parent is a category error. The president is not my boss. He's definitely not my father. He has no power, moral or legal, to tell me how to live my life beyond the very limited power of persuasion and a few contestable and narrow emergency powers. My dad could tell me to give my seat to a lady on the bus, and he did it many times. The president can't.

The body politic (corpus politicum) is one of the most fraught category errors in history. It was tolerable as a mystical medieval metaphor, but in the 19th and 20th centuries, intellectuals grabbed all sorts of pseudo-scientific nonsense off the shelf and argued that nation-states were organic entities. Herbert Croly, one of the co-founders of The New Republic, said society was just "an enlarged individual." Edward Alsworth Ross, arguably the most influential sociologist of his day, believed "society is a living thing, actuated, like all the higher creatures, by the instinct for self-preservation." When Woodrow Wilson rejected the system of checks and balances inherent to the Constitution, it was in service to these ideas. He rejected the vision of the Founders as naively Newtonian rather than Darwinian. "The trouble with the [Founders'] theory," Wilson wrote, "is that government is not a machine, but a living thing. It falls, not under the theory of the universe, but under the theory of organic life. It is accountable to Darwin, not to Newton. It is modified by its environment, necessitated by its tasks, shaped to its functions by the sheer pressure of life. No living thing can have its organs offset against each other, as checks, and live."

Wilson was wrong in every regard. Government is a machine in the sense that it is technology, a manufactured system designed for specific purposes. It is not in any way a living thing bound by the theory of organic life. Checks and balances work precisely because Congress isn't like a spleen and the judiciary isn't like a liver. Moreover, I'm not entirely sure that our organs don't work against each other in a checks-and-balancey kind of way, insofar as various organs regulate each other. But I could be wrong about that.

Nazis were obsessed with the idea that the Aryan nation was an organic entity, and that idea gave them permission to see other groups as parasites.

Now, some stickler might object to what I'm talking about by arguing that these theories were just bad metaphors and analogies. And that's fine. But when we don't consciously recognize that an idea is merely metaphorical (never mind a bad metaphor) we take it to be literal, or close enough to literal to act as if it were.

You could say category errors we like are just called metaphors or analogies. It's sort of like censorship. Pretty much everyone is in favor of censorship, but we only use the word "censorship" for the kinds of censorship we don't like. I used to have great fun arguing with libertarians of the right and left about this. They'd say something like, "I'm against all forms of censorship." And I'd respond, Socratically, "So you think it's fine for TV networks to replace Saturday morning cartoons with mock snuff films or simulated child pornography?" (I have to insert the "mock" and "simulated" qualifiers to avoid clever "but real snuff films and child pornography are illegal" rejoinders.) Eventually, most would end up arguing that censoring that stuff isn't really censorship, it's just "responsible programming" or some other euphemism. Naw, it's censorship, and I'm fine with that.

Similarly, with metaphors and analogies, if you don't regularly push back or poke holes in them, people come to accept them as descriptors of reality.

Bonfire of the mediocrities.

I had no idea I'd be spelunking down this rabbit hole. I planned on writing about the problems with our elites, but I'll save that for another time. Like the runza peddler said at the Cornhusker game, let's just circle back to Sen. Hruska.

The other thing I love about Hruska's representation-for-mediocrities argument is that it mangles the concept of representation. On the surface it kind of makes sense, like an intellectual Potemkin village. For starters, the Supreme Court is not a representative body, or at least it's not supposed to be. Forget the identity politics arguments about how the court is improved by, say, the presence of a "wise Latina" in ways that it wouldn't be improved by a wise Nordic. Why not put plumbers or electricians on the court? Don't they deserve representation, too? Although the court has always been top-heavy with Pale Penis People, it's been utterly monopolized by lawyers.

It's sort of like the term "diversity." Everyone likes to say they're in favor of diversity, but diversity, much like censorship, is very narrowly defined. We don't think the NBA would be improved if there was a quota to get more one-legged players or blind people on the court. When I talk to my financial adviser about diversifying my portfolio, I never say, "Make sure there's a healthy balance between good investments and bad investments." A balanced diet doesn't have a lot of strychnine or razor blades in it.

The idea that the court would be improved by mediocrity takes the familiar political logic of representation and exposes how it can take us in ridiculous directions if we don't recognize its limitations. It's funny precisely because it exposes how serious ideas can suddenly become silly by grabbing something from the wrong category and shoving it where it doesn't belong.

There's an unwritten rule not to verbalize such things. But a lot of the dysfunction in our politics is Hruskian in reality: Lots of people are fine with mediocrities representing them as long as they represent their team. Hruska supported Carswell because he was Nixon's pick and Nixon deserved a win. Run through the list of politicians garnering passionate support from partisans. Some are smart, many are dumb. Some know how to do their jobs, many don't have the first clue how policy is made or legislating is done. But the important question is: How often does intelligence or competence even enter into it?

As with diversity and censorship, representation is a broad category that we narrow down in reality: certain kinds of diversity, specific forms of censorship. If we understood representation in its broadest, most categorical sense, Congress should reflect a broad cross section of Americans that would include everything from morons to geniuses, violent criminals to pacifists, physicists to spoken-word poets. But we understand that the filter has to be set with a narrower screen.

The problem is that we have the filter on the wrong settings. If I want to hire an electrician, I might consider all sorts of factors: price, recommendations, availability, etc. But the indispensable qualification would be expertise. I would immediately rule out all people who arent electricians. In other words, can they do the job?

Marjorie Taylor Greene, to take a very easy example, is an ignoramus. She doesn't understand the job she was elected to, but even if she did, she couldn't do it because she's not on any committees (because she's also a bigoted loon). But Republican voters just renominated her, presumably on the grounds that what Congress needs is representation of bigoted lunacy and performative jackassery.

Most other politicians aren't elected for such ludicrous reasons. But many of them are elected to perform and entertain in ways that have nothing to do with the job itself. Alexandria Ocasio-Cortez is no fool, and she has an adequate academic grasp of the job, but she's also among the least effective members of Congress. She'd have to step up her game to be a mediocre legislator if effective legislating determined the bulk of her grade. But it doesn't for her voters, or for the media that lavishes attention and praise on her.

When it comes to hiring a politician, there are a bunch of things that can or should be on the checklist: ideological agreement, good character, patriotism, a good work ethic, a record of success, etc. You can even include things like religion, height, attractiveness, or odor. This is a democracy, after all, and people can vote for whatever reason they want. But one of the things that should be non-negotiable (not as a matter of law, but as a matter of civic hygiene) is the candidate's ability to do the job.

But for a lot of voters, the job description has been rewritten without even a minute of debate or discussion. Do they hate the other guys enough? Are they entertaining? Are they angry enough? Are they loyal to my team?

No wonder so few can do the actual job. That's not what they were hired for.

Various & Sundry

Canine update: So Zoë has been a bit melancholy of late. We don't know why, but she skipped a couple meals and is less interested in what is traditionally the source of her greatest joys: chasing rabbits and squirrels. We're keeping a close eye on her. It may just be age and the heat, which saps energy from the best of us. Pippa, meanwhile, is doing great, and having lots of fun with her spaniel buddy on the midday walks. Speaking of her pack, meet Willie, the newest member. Her limp seems to be permanently behind her (knock on wood). While both girls are passionately patriotic, they really hate the fireworks on the Fourth of July. Man-made thunder makes no sense to them.

ICYMI

Last Fridays G-File

Last weekends giant-sized, extra-patriotic Ruminant

Last weekends Dispatch Podcast on our post-Roe moment

The Remnant with FTC Commissioner Noah Phillips

This weeks Dispatch Live

Against a congressional criminal referral

Wednesdays newsletter

The Remnant with Noah Rothman

And now, the weird stuff

Adios

Gentlemans agreement

Fresh incentives

Fishy deals

The Peppa effect

Force choked

To kill a talking bird

Just making sure


Turn the other tweet: NYPD not heeding Adams’ call to censor violence on social media – Gothamist

Mayor Eric Adams has one of the biggest bully pulpits in the country, and for months he's used it to drive home this message: Get rid of violent imagery on social media.

"Look at what we are showing now on social media," the mayor said during a May interview on Pix 11. "We should be using artificial intelligence to identify words, identify phrases, to immediately remove and censor some of this information."

He later added, "The type of violence that's being promoted on social media is beyond anything I've ever witnessed before."

The mayor was responding to the online history of two recent mass shooting suspects. The man accused of the April subway shooting in Sunset Park had posted videos of violent ramblings on social media, and the suspect in the Buffalo grocery store shooting was live-streaming as the horror unfolded. The postings hurled Big Tech into the spotlight and inspired city and state leaders, including Gov. Kathy Hochul, to demand more from internet companies when it comes to policing the violence on their platforms.

The attack on social media has been a recurring theme in the mayor's rhetoric, but his May remarks came just days after his own police department posted surveillance footage of violent perpetrators pointing guns at their victims.

The New York City Police Department has long used social media to share information on crimes under investigation and to get the public's help finding suspects. Surveillance footage and imagery have become commonplace on the department's Twitter and Facebook pages. But as technology progressed, so did the frequency of graphic imagery on the department's online channels, creating a cycle of sometimes shockingly graphic imagery being shared online, picked up by local news outlets, and transmitted across the airwaves.

So while the mayor has been inveighing against the varied images of violence by civilians, there's been no shortage of it streaming from the NYPD's social media channels. The mayor's office declined to comment, but the NYPD told Gothamist there was value in showing video of certain crimes in progress because it might motivate the public to help catch criminals.

The footage is often raw and unedited, except for the obscuring of victims' faces. The posts often get picked up and shared by local media outlets and distributed on other social media platforms.

A tweet from June 7th showed a suspect tossing a 52-year-old woman onto subway tracks in the Bronx. A post on May 25th showed a 37-year-old woman getting violently kicked in the head and falling onto her back. On May 16th, the department posted footage on Twitter of a suspect in Queens beating a 24-year-old man over the head with a firearm. Another post from May 11th showed a suspect in Staten Island hitting a 54-year-old store employee on the head with a glass bottle and choking him. A tweet from May 4th showed a man in the Bronx punching a 77-year-old man in the face, knocking him over.

In an interview with Gothamist, NYPD Deputy Commissioner of Public Information John Miller said the department posts imagery like this to engage the public.

"Sometimes, one way to engage is to show either the incident or the brutality of the incident or the wanton nature of the incident, where you can tell these people are firing guns on a crowded street," Miller said. "And there are children in the background. There are mothers in the background. There are elderly people in the background. There's a park behind them and they just don't care where those bullets go. And sometimes, that in and of itself will add power to the imagery that goes with it."

Miller added that New York City is still one of the safest big cities in the country by most measures, with the number of shootings down from one year ago, but still up from pre-pandemic levels.

But sociologist Barry Glassner, who wrote The Culture of Fear: Why Americans Are Afraid of the Wrong Things, told Gothamist that the proliferation of images and videos of crimes in progress could make people feel more afraid than the crime statistics warrant without necessarily helping catch the perpetrators of crimes.

"Any added value for actually succeeding at the police work, I would be pretty confident, is not as great as the damage done by all these violent videos circulating around and creating more fear in the population and more sense that there's crime everywhere you turn," Glassner said. "And that it's very scary."

Glassner said the more people are inundated with the prevalence of crime (the more they see violent imagery online, such as the footage the NYPD shares) the more anxious the general public becomes, regardless of statistics.

He also said the recordings of crimes in progress present an incomplete picture.

"The recording of the event by the police presents one perspective," he said. "[It] doesn't capture the full context of what occurred. And so people watch this and it seems strictly factual and complete, and it can't be; it's not possible."

When determining what to share, and how to share it, Miller said officers comb through security footage and try to find identifiable images of the particular suspect. In many cases, he said, the department will share video footage so the public can see how a suspect might walk or move. If a victim is involved, he said, officers notify them about disseminating footage with their faces blurred.

"The deal with videos and imagery of violence that we put out has to do with a different set of obligations," he said. "And we shouldn't be considering whether it increases fear or not. Our first obligation is to the victim of that crime. The victim of that crime, above all considerations of perception and public relations and spin, the victim of that crime deserves justice."


The impact of artificial intelligence on iGaming – Business Insider Africa

The internet gaming industry leverages AI technology to power many things, such as algorithms that guide users to games they may prefer. These systems collect data based on your actions to forecast exactly what you're interested in, making things easier and more convenient for you. But it doesn't end there, and this post will cover other ways artificial intelligence impacts iGaming.

It offers a more personalised experience

Whether you're playing online games for real money or not, most people desire and expect a tailor-made experience when engaging in the activity. As mentioned previously, AI-based algorithms can collect data to gain insight into player habits and preferences. With these specifics, they can generate helpful projections that allow the operator to deliver a more personalised experience to users.

For instance, if you regularly play at online casinos, some may recommend specific titles to you as soon as you log in to the website. These suggestions come from a recommendation algorithm that depends on AI to function.
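As a rough illustration of the idea (not any particular casino's system), a recommender can be as simple as counting which games co-occur in the play histories of overlapping players. The player names, game titles, and `recommend` helper below are all hypothetical:

```python
from collections import defaultdict

def recommend(play_history, target_player, top_n=3):
    """Suggest games the target hasn't played, scored by how many
    players who share a game with the target also played them."""
    target_games = play_history[target_player]
    scores = defaultdict(int)
    for player, games in play_history.items():
        if player == target_player:
            continue
        if target_games & games:  # shares at least one game with the target
            for game in games - target_games:
                scores[game] += 1
    # Highest co-occurrence count first; ties broken alphabetically
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [game for game, _ in ranked[:top_n]]

history = {
    "alice": {"blackjack", "roulette"},
    "bob":   {"blackjack", "poker"},
    "carol": {"roulette", "poker", "slots"},
    "dave":  {"slots"},
}
print(recommend(history, "alice"))  # -> ['poker', 'slots']
```

Real systems use far richer signals (session length, stakes, time of day), but the co-occurrence intuition is the same.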

It can improve online safety

As more and more players engage in online gambling activities through their computers or mobile devices, security measures have become necessary to ensure the gaming environment's safety. Artificial intelligence has been shown to help safeguard players' privacy and protect their data while processing payment transactions.

A complementary example is SSL/TLS encryption. Strictly speaking, it is a cryptographic protocol rather than AI: it protects sensitive information during transactions and keeps data out of the hands of third parties. The most significant concern of players and bettors when playing online is the security of their account details. Hence, on top of encryption, most online casinos also use artificial intelligence to help detect and prevent data breaches or the unauthorised exposure of sensitive information.

It helps against cheating

Thanks to artificial intelligence, online gaming websites are able to detect fraudsters and cheaters effectively. Members' behavioural patterns are collected and analysed, which may determine whether a player is cheating. As a result, players will always be on equal ground, ensuring that everyone plays by the rules of their chosen games and is unable to manipulate the outcome using questionable tactics or additional software.
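A minimal sketch of what "behavioural patterns" can mean in practice: flag accounts whose average reaction time is implausibly fast relative to the rest of the player base, a telltale sign of bot assistance. The player names, timings, and threshold below are invented for illustration; production systems use far richer features and models:

```python
import statistics

def flag_suspects(reaction_ms, floor_ratio=0.3):
    """Flag players whose mean reaction time (in milliseconds) falls
    below floor_ratio times the population median -- a crude
    behavioural-anomaly check standing in for a real model."""
    means = {player: statistics.mean(times) for player, times in reaction_ms.items()}
    median = statistics.median(means.values())
    return sorted(player for player, m in means.items() if m < floor_ratio * median)

logs = {
    "human_1": [480, 510, 495],
    "human_2": [530, 470, 500],
    "human_3": [450, 520, 505],
    "bot_42":  [52, 48, 50],   # far too fast to be human
}
print(flag_suspects(logs))  # -> ['bot_42']
```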

AI has paved, and continues to pave, the way for a better experience in online gambling. With individualised options and recommendations, enhanced security measures, prevention of cheating, and even simulations of how real players would react in a game, artificial intelligence has undoubtedly made iGaming more engaging than it was in the past. As the technology behind it continues to evolve, the games will only get more immersive.


How Artificial Intelligence Is Changing The Law Industry for The Better – Legal Scoops

Artificial intelligence has been making major headway in many areas of the workforce, bringing considerable changes to fields such as architecture, agriculture, sports analytics, and even the law industry. With AI improving so many fields of work, how has it changed the law industry for the better?

Artificial intelligence has improved the law industry in several ways, such as speeding up data processing and legal research, generating content, and decreasing overall stress. In addition, artificial intelligence has made the law industry more productive, giving lawyers more time to focus on their clients and cases rather than tedious paperwork.

The rest of this article will describe how artificial intelligence has improved the law industry.

Artificial intelligence (AI) has improved data processing and research in law practices. It can help with the discovery process, which is one of the most time-consuming aspects of the practice and can present many challenges for attorneys. AI can help facilitate this process by sorting through data and information to locate relevant documents. AI can also sort through various ways the topic may be referenced.

Artificial intelligence can improve data processing because it is programmed to work with patterns and large amounts of data.

When it comes to legal research, AI can't replace human judgment. However, it can improve it by simplifying the process's more time-consuming, data-related aspects. It gives law teams better research capabilities, using algorithms to sort through significant amounts of information.
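The "algorithms to sort information" behind such research tools are typically relevance-ranking models. As a toy, hedged illustration (the documents, query, and `rank_documents` helper are invented, and production research platforms are far more sophisticated), a TF-IDF scorer ranks documents by how strongly they match the query terms:

```python
import math
from collections import Counter

def rank_documents(docs, query):
    """Rank documents by summed TF-IDF weight of the query terms:
    frequent-in-document but rare-across-corpus terms score highest."""
    tokenized = {name: text.lower().split() for name, text in docs.items()}
    n = len(tokenized)

    def idf(term):
        # Smoothed inverse document frequency
        df = sum(1 for words in tokenized.values() if term in words)
        return math.log((n + 1) / (df + 1)) + 1

    terms = query.lower().split()
    scores = {}
    for name, words in tokenized.items():
        counts = Counter(words)
        scores[name] = sum((counts[t] / len(words)) * idf(t) for t in terms)
    return sorted(scores, key=scores.get, reverse=True)

corpus = {
    "contract_a": "breach of contract damages liquidated damages clause",
    "memo_b":     "employment dispute arbitration clause",
    "brief_c":    "negligence tort damages",
}
print(rank_documents(corpus, "damages clause"))
```

Here "contract_a" ranks first because it matches both query terms, with "damages" appearing twice.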

Artificial intelligence can help with case building and content production, analyzing draft arguments, and putting together legal contracts. Although the ability to generate content is limited, it can still help with the initial stages of writing legal contracts and other documents. When it comes to analyzing draft arguments, AI can assist in identifying weak aspects of the argument and locating inaccurate information.

Computers programmed with legal-based information can also assist in reviewing contracts and other documents. AI's ability to process large amounts of information can help attorneys find errors and problems in contracts and other legal documents that might otherwise have been missed. By doing this, artificial intelligence can speed up the document review process and make it more consistent and accurate.

In addition to the more technical aspects of artificial intelligence, it can assist with case building by improving interdepartmental communication. For example, AI can help legal teams avoid long, arduous meetings while keeping them moving forward on the same page.

Though AI is taking over the more time-consuming tasks, it won't replace lawyers. Instead, it will free up their time, allowing them to concentrate on case building and personal relationships with clients.

The legal profession is known for its long hours, and lawyers have high rates of stress and depression, which can negatively affect their work. Artificial intelligence could significantly improve morale by reducing the long hours and stress that come with the field.

Some people fear artificial intelligence is taking over careers, displacing workers in favor of faster, more consistent technology. However, in many fields, such as the law industry, it is quite the opposite.

Artificial intelligence holds significant benefits for legal teams and lawyers, decreasing the time spent on data processing and legal research and improving morale and time management. In addition, by not focusing on tedious tasks, lawyers can devote more energy to case building and the more personal aspects of the legal process.

As the field of technology continues to change, bringing new advances such as artificial intelligence, many other professions will begin to see significant benefits. The law industry is no exception, and artificial intelligence holds great promise for the future of the legal field.

The senior editor of Legal Scoops, Jacob Maslow, has founded several online newspapers including Daily Forex Report and Conservative Free Press


Manufacturing, Artificial Intelligence, And Augmented Reality: Integration Advances – Forbes

Artificial Intelligence

Manufacturing floors and warehouses present a number of complex issues. Safety is always the first concern, but efficiency follows close behind. Creating workflows is a longstanding technique for improving both factors, but how can the results be improved further? Artificial intelligence (AI) has begun, in recent years, to creep into manufacturing processes as it has into every other area of business. Advances in AI, networking, and edge devices are bringing another modern technology into the mix: augmented reality (AR). The combination of AI and AR is the latest attempt to increase both safety and productivity.

Shop floors are busy and dangerous places. Since the start of the industrial revolution, there has been tension between the demands of efficiency and safety. In the early days, safety wasn't that important. Over the last century, however, it has come to the forefront, and most companies now lead with it in their messaging, even if some only address it there. Productivity still matters, though, and the advanced technologies of recent decades have often focused on improving both factors at the same time.

When it comes to AI, early applications sat at two very separate extremes. First, early vision AI looked for simple safety problems, such as missing hardhats. Second, AI was used to work out optimal plant processes and process flows. Thanks to recent advances, systems can begin to integrate those features and others. One technology that's helping do that is AR.

Popular culture seems conversant with virtual reality (VR): it has been a trope in many movies, and games have begun both imitating VR and working in it. AR had its pop-culture moment years ago, with the glasses put forward by a large company, but that product failed and many haven't thought much about AR since. Simply put, AR is the concept of augmenting reality with technology. One simple example is the heads-up display, used in fighter planes for years and, in a more limited fashion, in some recent cars. One sector that has been adopting the technology is surgery, where VR, AI, and robots extend what surgeons can know as they operate.

A couple of years ago, I covered a company whose inspectors would wear a headset while walking a construction site. The images were stored and later compared to blueprints. Part of the fun of this column is seeing the changes over time. Technology has advanced, both in AI and in networking, and what those advances are beginning to deliver is a more immediate link between the backend AI systems and the person in the field.

One of the first ways AR is helping in manufacturing is simple. Take a warehouse. When a lot of product is moving through, checking inventory can be slow. With an AR system, a person can look at a group of boxes and the backend system can count them. That's just simple visual AI. But that count can also be integrated with inventory and shipping systems and compared to the expected count, and a display can pop up telling the person, for instance, to look for two more boxes that should also have been in the order.
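The reconciliation step described above is easy to sketch in code. The snippet below is a minimal illustration, assuming the headset's vision backend already returns a box count; the function name and order data are hypothetical, not any vendor's API:

```python
# Sketch of the inventory-reconciliation step: compare a visual box
# count against the expected order size and produce the message shown
# on the worker's AR display. All names and data are hypothetical.

def reconcile_count(order_id: str, seen: int, expected: dict) -> str:
    """Build the AR display message for one order."""
    want = expected[order_id]
    if seen == want:
        return f"Order {order_id}: complete ({seen}/{want})"
    if seen < want:
        return f"Order {order_id}: look for {want - seen} more box(es)"
    return f"Order {order_id}: {seen - want} extra box(es) detected"

expected_orders = {"A-102": 12}
print(reconcile_count("A-102", 10, expected_orders))
# -> Order A-102: look for 2 more box(es)
```

The interesting part is not the counting itself but the integration: the same comparison could be fed by any inventory or shipping system that knows the expected quantity.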

Another example is physical safety. It's one thing to notice whether someone is wearing a hardhat; that's now among the must-have, more basic visual identification tools. Let's extend that. The visual system can capture warning signs requiring hardhats, gloves, and other safety measures, or even use GPS information and database content about the manufacturing floor to know what safety measures are required at each place on the floor. "Gloves are a good example of how the power of AI can be used to improve safety on the manufacturing floor," said Dr. Hendrik Witt, Chief Product Officer, TeamViewer. "AI can now not only detect whether you are wearing gloves, but whether you are wearing the right type of gloves for the specific situation, and then immediately notify the worker for a safety correction."

[Image: AiStudio analyzing gloves]
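A rough sketch of how such a location-aware check might work, assuming a database of required gear per floor zone and a set of items the vision system has detected on the worker. All zone names, item names, and data here are invented for illustration:

```python
# Location-aware PPE check: a database maps each floor zone to its
# required safety gear; the vision system's detections are compared
# against it. Everything here is hypothetical illustration.

REQUIRED_PPE = {
    "welding-bay": {"hardhat", "welding-gloves", "goggles"},
    "loading-dock": {"hardhat", "hi-vis-vest"},
}

def ppe_violations(zone: str, detected: set) -> set:
    """Return the safety items missing for the worker's current zone."""
    return REQUIRED_PPE[zone] - detected

# Wearing the wrong glove type counts as missing the required one:
missing = ppe_violations("welding-bay", {"hardhat", "cotton-gloves"})
print(sorted(missing))
# -> ['goggles', 'welding-gloves']
```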

A final example is another safety issue. Background process analytics, using either AI or standard analytics, can be run to estimate the potential for fatigue. A person lifting 10 lb. boxes doesn't tire as fast as the same person lifting 40 lb. boxes. Reminding people when to take breaks is just as important to safety as making sure they are working safely.

An aspect of AI/AR that has also slowed adoption is the need for companies to hire experts in that function, experts who aren't in vast supply and who cost more. Companies such as TeamViewer are working to simplify the training of these systems, making them as low-code as possible. That is what previous technology cycles have had to do in order to widen adoption. It's also one of my key soapboxes: very few companies need the magical data scientist. What needs to happen is the building of systems that can talk to line users in the language they know.

"Modern augmented reality is about more than the important goal of improving safety," said Witt. "It's about understanding process flows, about integrating AR systems with more of the full range of ERP and other backend software, not only with AI. It's about having systems that non-specialists can use to accomplish their own tasks."

The aforementioned glasses were a fad, something cool for a small audience. AR is now being applied to more focused arenas, including the manufacturing sector. That focus may finally bring the ROI to investments in AR and AI that will spread both technologies more widely.

See the original post here:
Manufacturing, Artificial Intelligence, And Augmented Reality: Integration Advances - Forbes

Why artificial intelligence in the NHS could fail women and ethnic minorities – iNews

Artificial intelligence (AI) could lead to UK health services that disadvantage women and ethnic minorities, scientists are warning.

They are calling for biases in the systems to be rooted out before their use becomes commonplace in the NHS.

They fear that without that preparation AI could dramatically deepen existing health inequalities in our society.

i can reveal that a new government-backed study has found that artificial intelligence models built to identify people at high risk of liver disease from blood tests are twice as likely to miss disease in women as in men.

The researchers examined the state-of-the-art approach to AI used by hospitals worldwide and found it had a 70 per cent success rate in predicting liver disease from blood tests.

But they uncovered a wide gender gap underneath: 44 per cent of cases in women were missed, compared with 23 per cent of cases among men.

This is the first time bias has been identified in AI blood tests.

"AI algorithms are increasingly used in hospitals to assist doctors in diagnosing patients. Our study shows that, unless they are investigated for bias, they may only help a subset of patients, leaving other groups with worse care," said Isabel Straw, of University College London, who led the study, published in the journal BMJ Health & Care Informatics.

"We need to be really careful that medical AI doesn't worsen existing inequalities."

"When we hear of an algorithm that is more than 90 per cent accurate at identifying disease, we need to ask: accurate for who? High accuracy overall may hide poor performance for some groups."
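The gap the study describes is easy to reproduce numerically. This sketch uses illustrative counts chosen to match the article's rates (44 per cent of cases missed in women, 23 per cent in men) and shows how a single aggregate figure hides the disparity:

```python
# Illustrative counts matching the article's rates: the model finds
# 56 of 100 true cases in women and 77 of 100 in men. The aggregate
# detection rate looks respectable; the per-group rates do not.

cases = {"women": {"detected": 56, "missed": 44},
         "men":   {"detected": 77, "missed": 23}}

def detection_rate(group: dict) -> float:
    """Share of true cases the model catches for one group."""
    total = group["detected"] + group["missed"]
    return group["detected"] / total

overall_detected = sum(g["detected"] for g in cases.values())
overall_total = sum(g["detected"] + g["missed"] for g in cases.values())
print(f"overall: {overall_detected / overall_total:.1%} detected")
for name, g in cases.items():
    print(f"{name}: {detection_rate(g):.0%} detected")
# -> overall: 66.5% detected
# -> women: 56% detected
# -> men: 77% detected
```

This is exactly Straw's point: reporting only the overall figure would conceal a 21-point gap between the groups.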

Other experts, not involved in the study, say it helps shine a light on the threat posed to health equality as AI use, already quite common in the US, starts to take off in the UK.

Brieuc Lehmann, a UCL health data science specialist and co-founder of the expert panel Data for Health Equity, says the use of AI in healthcare in the UK is very much in its infancy but is likely to grow rapidly in the next five to 10 years.

"It's absolutely crucial that people get a handle on AI bias in the next few years. With the ongoing squeeze on NHS budgets, there will be growing pressure to use AI to reduce costs," he said.

"If we don't get a hold on biases, there will be a temptation to deploy AI tools before we've adequately assessed their impact, which carries with it the risk of worsening health inequalities."

Lauren Klein, co-author of the book Data Feminism and an academic at Emory University in Atlanta in the US, said the liver disease study showed how important it was to get AI systems right.

"Examples like this demonstrate how a failure to consider the full range of potential sources of bias can have life-or-death consequences," she said.

"AI systems are predictive systems. They make predictions about what's most likely to happen in the future on the basis of what's most often happened in the past. Because we live in a biased world, those biases are reflected in the data that records past events.

"And when that biased data is used to predict future outcomes, it predicts outcomes with those same biases."

She gave the example of a major tech firm that developed a CV screening system as part of its recruitment process.

But because the examples of good CVs came from existing employees, who were predominantly men, the system developed a preference for the CVs of male applicants, disadvantaging women and perpetuating the gender imbalance.

"AI systems, like everything else in the world, are made by humans. When we fail to recognise that fact, we leave ourselves open to the false belief that these systems are somehow more neutral or objective than we are," Dr Klein added.

It is not the AI itself that is biased, experts stress, since it only learns from the data it is given; the problem is rather the information it is given to work with.

David Leslie, director of ethics and responsible innovation research at the Alan Turing Institute, is concerned that AI may make things worse for minority groups.

In an article for the British Medical Journal last year, he warned: "The use of AI threatens to exacerbate the disparate effect of Covid-19 on marginalised, under-represented, and vulnerable groups, particularly Black, Asian, and other minoritised ethnic people, older populations, and those of lower socioeconomic status."

"AI systems can introduce or reflect bias and discrimination in three ways: in patterns of health discrimination that become entrenched in datasets, in data representativeness [with sample sizes in many groups often very small], and in human choices made during the design, development, and deployment of these systems," he said.

Honghan Wu, associate professor in health informatics at University College London, who also worked on the study about blood test inequalities, agrees that AI models can not only replicate existing biases but also make them worse.

"Current AI research and developments would certainly bake in existing biases from the data they learnt from and, even worse, potentially induce more biases from the way they were designed," he said.

"These biases could potentially accumulate within the system, which leads to more biased data that is later used for training new AI models. This is a scary circle."

He has just completed a study looking at four AI models based on more than 70,000 ICU admissions to hospitals in Switzerland and the US, due to be presented at the European Conference on Artificial Intelligence in Austria next month.

This found that women and non-white people with kidney problems had to be considerably more ill than men and white people to be admitted to an ICU ward or recommended for an operation, respectively.

And it found the AI models exacerbated data-embedded inequalities significantly in three out of eight assessments, one of which was more than nine times worse.

"AI models learn their predictions from the data," Dr Wu said. "We say a model exacerbates inequality when the inequalities induced by it are higher than those embedded in the data it learned from."
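Dr Wu's definition can be illustrated with a toy calculation: measure the disparity between groups in the historical data, measure it again in the model's recommendations, and flag the model when the induced disparity exceeds the embedded one. The rate-ratio metric and all numbers below are invented for illustration and are not the study's data:

```python
# Toy version of "exacerbating inequality": compare the group disparity
# embedded in the data with the disparity induced by the model, using a
# simple ratio of admission rates. All figures are fabricated.

def rate(admitted: int, total: int) -> float:
    return admitted / total

def disparity(group_a: float, group_b: float) -> float:
    """Ratio of group rates; 1.0 means parity."""
    return group_a / group_b

# ICU admission rates in the historical data (hypothetical):
embedded = disparity(rate(60, 100), rate(50, 100))  # 1.2
# Admission rates recommended by the trained model (hypothetical):
induced = disparity(rate(75, 100), rate(50, 100))   # 1.5

print(f"embedded={embedded:.2f} induced={induced:.2f}")
print("model exacerbates inequality:", induced > embedded)
```

Under this toy metric the model amplifies the gap it inherited, which is the "scary circle" Dr Wu describes once such outputs feed back into future training data.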

But some experts say there are also reasons for optimism, because AI can also be used to actively combat bias within a health system.

Ziad Obermeyer, of the University of California at Berkeley, who worked on a landmark study that helped to explain how AI could introduce racial bias (see box below), said he had also shown in separate research that an algorithm can find causes of pain in Black patients that human radiologists miss.

"There's increasing attention from both regulators who oversee algorithms and, just as importantly, from the teams building algorithms," he told i.

"So I am optimistic that we are at least moving in the right direction."

Dr Wu, at UCL, is working on ways to solve AI bias but cautions this area of research is still in its infancy.

"AI could lead to a poorer performing NHS for women and ethnic minorities," he warns.

"But the good news is, AI models haven't been used widely in the NHS for clinical decision-making, meaning we still have the opportunity to make them right before the poorer performing NHS happens."

Using the wrong proxy, or variable, to predict risk is probably the most common way in which AI models can magnify inequalities, experts say.

This is demonstrated in a landmark study, published in the journal Science, which found that a category of algorithms that influences health care decisions for over a hundred million Americans shows significant racial bias.

In this case, the algorithms used by the US healthcare system to determine who gets into care management programmes were based on how much patients had cost the healthcare system in the past, using that spending to determine how at-risk they were from their current illness.

But because Black people typically use healthcare less in America, in part because they are more likely to distrust doctors, the algorithm design meant they had to be considerably more ill than a white person to be eligible for the same level of care.

However, by tweaking the US healthcare algorithm to use other variables, or proxies, to predict patient risk, the researchers were able to correct much of the bias initially built into the AI model, reducing it by 84 per cent.

And by correcting for the health disparities between Black and white people, the researchers found that the percentage of Black people in the automatic enrollee group jumped from 18 per cent to 47 per cent.
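The proxy problem described in the study can be illustrated with a toy ranking exercise: score the same fabricated patients once by past healthcare cost and once by a direct illness measure, and see how the choice of proxy changes who is enrolled. Nothing here reflects the study's actual model or data:

```python
# Toy illustration of proxy choice: ranking patients by past cost
# (a biased proxy, since some groups historically use and receive
# less care) versus by a direct illness measure. Records are fabricated.

patients = [
    # (id, group, illness_severity, past_cost_in_dollars)
    ("p1", "black", 9, 300),  # very ill, low past spending
    ("p2", "white", 7, 900),
    ("p3", "black", 8, 250),  # very ill, low past spending
    ("p4", "white", 5, 800),
]

def enroll_top2(key_index: int) -> set:
    """Enroll the two highest-scoring patients under the given proxy."""
    ranked = sorted(patients, key=lambda p: p[key_index], reverse=True)
    return {p[0] for p in ranked[:2]}

print("cost proxy:   ", sorted(enroll_top2(3)))
print("illness proxy:", sorted(enroll_top2(2)))
# -> cost proxy:    ['p2', 'p4']
# -> illness proxy: ['p1', 'p3']
```

The cost proxy enrolls the historically expensive patients; the illness proxy enrolls the sickest. The underlying ranking machinery is identical, which is why swapping the proxy variable alone removed most of the bias in the Science study.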

The NHS is aware of the problem and is taking a number of steps. These include:

Go here to read the rest:
Why artificial intelligence in the NHS could fail women and ethnic minorities - iNews

WHO and I-DAIR to partner for inclusive, impactful, and responsible international research in artificial intelligence (AI) and digital health – World…

The World Health Organization (WHO) and the International Digital Health and AI Research Collaborative (I-DAIR) have signed a Memorandum of Understanding (MoU) outlining their joint efforts to advance the use of digital technologies for personal and public health globally.

Through this agreement, WHO and I-DAIR will work together to harness the digital revolution towards urgent health challenges, while emphasizing equity and greater participation from Low and Middle-Income Countries (LMIC) in the research and development and governance of the digital health and AI space, with particular focus on the inclusion of young researchers and entrepreneurs.

The partnership will focus on achieving these common goals through a multi-faceted approach focusing on promoting scientific cross-domain/cross-border collaboration and implementing innovative digital health long-term solutions, consistent with WHO recommendations and interoperability standards.

The joint activities include, inter alia, the promotion and development of new norms and guidelines for the governance of health data as a public good, the building of evidence cases for thoughtful investments in digital health globally, and the strengthening of stakeholders' capacities, for instance via the joint elaboration of the WHO digital health competency framework.

###

I-DAIR is a multi-stakeholder platform for enabling global research collaborations on digital health and for convening stakeholders to develop global public goods aimed at solving issues around the inclusive, equitable, and responsible deployment of data and AI for health. For more information about our projects, we invite you to visit our website.

See more here:
WHO and I-DAIR to partner for inclusive, impactful, and responsible international research in artificial intelligence (AI) and digital health - World...

India goes big on Artificial Intelligence in defence; 75 products to be launched on July 11 – Asianet Newsable

New Delhi, First Published Jul 9, 2022, 11:08 AM IST

New Delhi: Aiming to make India a self-reliant country in the field of technologies and innovation, the defence ministry will launch 75 Artificial Intelligence products on July 11 at the first-ever Artificial Intelligence in Defence (AIDef) symposium and exhibition, to be launched by its minister, Rajnath Singh.

Talking to media persons here on Friday, Defence Secretary Ajay Kumar said: "We are going to have a programme where a total of 75 products powered by Artificial Intelligence will be launched on Monday. It would possibly be the largest event." The launch of 75 products coincides with the 75 years of Azadi Ka Amrit Mahotsav.

Highlighting the AI in the defence forces, Kumar said that the nature of modern warfare has changed, and artificial intelligence has played a significant role in all forms.

Also read:Indian Army wants to buy 29,762 night sights for assault rifles

The products to be launched on Monday were developed by the Services (Army, Navy and Air Force), DRDO, DPSUs, startups and the private sector. Some of the 75 products are for dual-use purposes, including civil use.

The products are in the domains of automation, unmanned, robotics systems, cyber security, human behaviour analysis, intelligent monitoring system, logistics and supply chain management, speech/voice analysis and Command, Control, Communication, Computer & Intelligence, Surveillance & Reconnaissance (C4ISR) systems and Operational Data Analytics.

Besides, a total of 100 AI-enabled products are at various stages of development.

On July 11, the minister will felicitate two top defence exporters from the public and private sectors.

Also read:Fighter jets for INS Vikrant to be bought the Rafale way

In reply to a question, Additional Secretary Sanjay Jaju said that defence exports had crossed the highest-ever figure of Rs 13,000 crore in 2021-22, with 70 per cent of the contribution coming from the private sector and the remaining 30 per cent from the public sector.

Ajay Kumar further stated that an AI task force on defence was established in 2018 to provide a road map for promoting AI in defence.

Acting on its recommendations, a Defence Artificial Intelligence Council was set up headed by the defence minister. The council is spearheading the effort.

Last Updated Jul 9, 2022, 11:08 AM IST

Link:
India goes big on Artificial Intelligence in defence; 75 products to be launched on July 11 - Asianet Newsable

Artificial Intelligence in Medical Diagnostics Market Worth $9.38 Billion by 2029 – Exclusive Report by Meticulous Research – GlobeNewswire

Redding, California, July 07, 2022 (GLOBE NEWSWIRE) -- According to a new market research report, 'Artificial Intelligence in Medical Diagnostics Market by Component (Software, Services), Specialty (Radiology, Cardiology, Neurology, Obstetrics/Gynecology, Ophthalmology), Modality (MRI, CT, X-ray, Ultrasound), End User (Hospital, Diagnostic Center) - Global Forecast to 2029,' published by Meticulous Research, the AI in medical diagnostics market is expected to grow at a CAGR of 36.2% during the forecast period to reach $9.38 billion by 2029.
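As a sanity check on the headline figures, compounding at the stated CAGR can be inverted to estimate the implied base-year market size. The sketch below assumes a seven-year 2022-2029 compounding window, which is our reading of the forecast period, not a figure stated in the report:

```python
# Invert the compound-growth formula: if the market reaches $9.38B in
# 2029 at a 36.2% CAGR, the implied base-year (assumed 2022) size is
# target / (1 + cagr) ** years. The 7-year window is an assumption.

target_billion = 9.38
cagr = 0.362
years = 7

implied_base = target_billion / (1 + cagr) ** years
print(f"implied 2022 market: ${implied_base:.2f}B")
# -> implied 2022 market: $1.08B
```

Roughly a billion-dollar starting market growing ninefold over the forecast period, which is consistent with the aggressive growth rate the report claims.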

Download Free Sample Report Now @ https://www.meticulousresearch.com/download-sample-report/cp_id=5312

AI in medical diagnostics consists of AI software and services that aid healthcare professionals in identifying the diagnosis of different diseases. AI-based software solutions can analyze the data from a diagnostic procedure and either help triage patients by flagging abnormal medical images or suggest the healthcare professional a suitable diagnosis. AI in medical diagnostics integrates deep learning, data insights, and algorithms to detect life-threatening and critical diseases. It automates the diagnosis process and reduces the workload on healthcare professionals.

The main factors driving the AI in medical diagnostics market are the growing need for the adoption of AI in medical diagnosis due to the high rate of errors in medical diagnosis, shortage of healthcare professionals, and increasing prevalence of chronic diseases. Furthermore, the high growth potential in emerging economies and the growing number of cross-industry partnerships & collaborations are expected to provide significant growth opportunities for this market.

However, the reluctance to adopt AI technologies due to a lack of trust is expected to restrain the growth of this market to a notable extent. In addition, factors such as regulatory barriers and privacy and security concerns regarding patient data are the major challenges to the growth of this market.

The Impact of COVID-19 on the Artificial Intelligence in Medical Diagnostics Market

The outbreak of the COVID-19 pandemic in 2020 was a global public health challenge. Case numbers skyrocketed, placing a huge burden on many countries' health systems. COVID-19 mainly affects the patient's lungs; hence, cardiothoracic imaging is a common diagnostic practice for identifying the severity of the disease. The number of research studies using AI techniques to diagnose COVID-19 rose rapidly in 2020. Many studies focused on diagnosing COVID-19 from chest CT images using AI technology, and several proved that AI models might be as accurate as experienced radiologists in diagnosing COVID-19.

Speak to our Analysts to Understand the Impact of COVID-19 on Your Business:https://www.meticulousresearch.com/speak-to-analyst/cp_id=5312

CT scans were identified as the key modality for diagnosing COVID-19 at the onset of the disease. Healthcare professionals identified the severity of the disease from features like shadows over the patient's lungs. A single patient had approximately 300 CT images, which took a doctor a long time to analyze with the naked eye. Radiologists also needed to compare scans with earlier ones, increasing the pressure on healthcare staff. In such situations, AI-based systems can analyze CT images within 20 seconds, with an accuracy rate above 90% (Source: Nature Biomedical Engineering Journal). In addition, UC San Diego Health (U.S.) engineered a new method to expedite the diagnosis of pneumonia, a condition associated with severe COVID-19. This early detection helps doctors quickly triage patients to appropriate levels of care even before the COVID-19 diagnosis is confirmed. In May 2020, Mount Sinai Health System (U.S.) implemented artificial intelligence to analyze COVID-19 patients for rapid diagnosis based on CT scans and patient data. Thus, the advantages offered by AI technology have increased its adoption in medical diagnostics during the pandemic.

The AI in medical diagnostics market is segmented based on component, specialty, modality, end user, and geography. The study also evaluates industry competitors and analyzes the market at the country level.

Based on component, in 2022, the software segment is estimated to account for the largest share of the AI in medical diagnostics market. The large market share of this segment is attributed to the high demand for AI-based software solutions to deliver a quick and accurate medical diagnosis, the growing number of new software approvals & launches, and the rising shortage of specialists.

Based on specialty, in 2022, the radiology segment is estimated to account for the largest share of the AI in medical diagnostics market. The large market share of this segment is attributed to the growing demand for AI in medical imaging, increasing chronic disorders, an increasing number of new software products for AI in radiology, and the increasing global shortage of radiologists. In addition, the benefits of AI for radiologists in terms of non-interpretive data, such as reducing noise in medical images, creating high-quality images from lower doses of radiation, enhancing magnetic resonance image quality, and automatically assessing image quality, also supports the growth of this segment.

Based on modality, in 2022, the CT-scan segment is estimated to account for the largest share of the overall AI in medical diagnostics market. The large market share of this segment is attributed to the advantages that AI-based solutions offer, such as improved operational efficiency, reduced noise in medical images, and reduced patient backlogs and wait times. Additionally, the increasing patient pool prescribed for CT scans and growing numbers of products specific for CT scans supports the growth of this segment.

Quick Buy Artificial Intelligence in Medical Diagnostics Market Research Report: https://www.meticulousresearch.com/Checkout/76337122

Based on end user, in 2022, the hospitals segment is estimated to account for the largest share of the AI in medical diagnostics market. The large share of this segment is attributed to the increasing number of patients undergoing diagnostics procedures in hospitals, the robust financial capabilities of large hospitals to acquire high-cost AI-based software & services, the growing shortage of physicians, and the outbreak of the COVID-19 pandemic.

Based on geography, in 2022, North America is estimated to account for the largest share of the AI in medical diagnostics market, followed by Europe and Asia-Pacific. Some of the major factors driving the growth of the North American AI in medical diagnostics market include technological developments, an increasing number of new product approvals, a high adoption rate of AI in healthcare, the presence of key market players, and established IT infrastructure in the healthcare sector. However, Asia-Pacific is slated to register the highest growth rate in the AI in medical diagnostics market during the forecast period. The high market growth in Asia-Pacific is attributed to the increasing prevalence of various chronic and infectious diseases, the increasing number of AI-based startups, especially in China and India, increasing funding, and the large potential of AI in addressing gaps in the region's healthcare infrastructure.

The report also includes an extensive assessment of the component, specialty, modality, end user, and geography segments, and key strategic developments adopted by leading market participants in the industry over the past four years (2019–2022). In recent years, the AI in medical diagnostics market has witnessed numerous product launches, approvals, agreements, collaborations, partnerships, and acquisitions.

The key players profiled in this market study are Siemens Healthineers AG (Germany), GE Healthcare (U.S.), Aidoc Medical Ltd. (Israel), International Business Machines Corporation (U.S.), AliveCor, Inc. (U.S.), VUNO Inc. (South Korea), Digital Diagnostics Inc. (U.S.), NovaSignal Corp. (U.S.), Riverain Technologies (U.S.), NANO-X IMAGING LTD (Israel), Imagen Technologies (U.S.), Koninklijke Philips N.V. (Netherlands), Agfa-Gevaert Group (Belgium), HeartFlow, Inc. (U.S.), and Arterys Inc. (U.S.).

To gain more insights into the market with a detailed table of content and figures, click here:https://www.meticulousresearch.com/product/artificial-intelligence-in-medical-diagnostics-market-5312

Scope of the Report:

Artificial Intelligence in Medical Diagnostics Market, by Component

Artificial Intelligence in Medical Diagnostics Market, by Specialty

Artificial Intelligence in Medical Diagnostics Market, by Modality

Artificial Intelligence in Medical Diagnostics Market, by End User

Artificial Intelligence in Medical Diagnostics Market, by Geography

Request Free Customization of Report @https://www.meticulousresearch.com/request-customization/cp_id=5312

Related Reports:

Healthcare Artificial Intelligence Market by Product and Services (Software, Services), Technology (Machine Learning, NLP), Application (Medical Imaging, Precision Medicine, Patient Management), End User (Hospitals, Patients) - Global Forecast to 2027

https://www.meticulousresearch.com/product/healthcare-artificial-intelligence-market-4937

Medical Image Management Market by Product {PACS [Departmental (Radiology, Mammography, Cardiology), Enterprise], VNA [(On-premise, Hybrid), [Vendor (PACS, Independent Software, Infrastructure)], AICA, Universal Viewer} and End User Global Forecast to 2027

https://www.meticulousresearch.com/product/medical-image-management-market-4761

Precision Medicine Software Market by Deployment Mode (On-premise, Cloud-based), Application (Oncology, Pharmacogenomics, CNS), End User (Healthcare Providers, Research, Academia, Pharma, Biotech) - Global Forecast to 2028

https://www.meticulousresearch.com/product/precision-medicine-software-market-5011

About Meticulous Research

Meticulous Research was founded in 2010 and incorporated as Meticulous Market Research Pvt. Ltd. in 2013 as a private limited company under the Companies Act, 1956. Since its incorporation, the company has become the leading provider of premium market intelligence in North America, Europe, Asia-Pacific, Latin America, and the Middle East & Africa.

The name of our company defines our services, strengths, and values. Since our inception, we have strived to research, analyze, and present critical market data with great attention to detail. With meticulous primary and secondary research techniques, we have built strong capabilities in data collection, interpretation, and analysis, covering both qualitative and quantitative research, with a fine team of analysts. We design our meticulously analyzed, intelligent, and value-driven syndicated market research reports, custom studies, quick-turnaround research, and consulting solutions to address the business challenges of sustainable growth.

Contact:
Mr. Khushal Bombe
Meticulous Market Research Inc.
1267 Willis St, Ste 200, Redding, California, 96001, U.S.
USA: +1-646-781-8004
Europe: +44-203-868-8738
APAC: +91 744-7780008
Email: sales@meticulousresearch.com
Visit Our Website: https://www.meticulousresearch.com/
Connect with us on LinkedIn: https://www.linkedin.com/company/meticulous-research
Content Source: https://www.meticulousresearch.com/pressrelease/538/artificial-intelligence-in-medical-diagnostics-market-2029

Read this article:
Artificial Intelligence in Medical Diagnostics Market Worth $9.38 Billion by 2029 - Exclusive Report by Meticulous Research - GlobeNewswire