The Patriot Act: Mass Surveillance Before and After 9/11 – Privacy News Online

The events of 9/11 shook the world, and rightly so: the devastation left US citizens stricken with fear. We were willing to do anything to protect ourselves and our country from another terrorist attack.

Anti-terrorism legislation was introduced left and right. It didn't take long to pass the US Patriot Act, which provided the government with broad surveillance capabilities. Now, any group, individual, or entity known to be conspiring with terrorists could be monitored legally. It was the solution and comfort Americans needed at the time.

Unfortunately, the Act included several sunset clauses making it legal for the government to perform mass surveillance on US citizens without actual proof of ties to terrorism. This isn't the first time the US has spied on private citizens without us knowing, and it won't be the last.

Come with me on a trip down memory lane, looking back at US surveillance over the past 8 decades. You'll discover how personal privacy has taken a back seat, and, importantly, what's happening to fix the problem.

The Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT) Act was established after the terrorist attacks on the World Trade Center buildings in New York on September 11, 2001. While the acronym is meaningful, it's quite a mouthful, so most people simply refer to it as the Patriot Act.

Officials discovered most of the planning for the attacks was conducted through internet communication and looked for a way to make laws regarding surveillance of terrorist suspects more defined. Federal laws related to terrorism became more defined under the Act, making it possible for the federal government to track and seize money and accounts connected to terrorist factions or organizations. The government was also allowed to divert federal funds to help victims of terrorist attacks, and anti-terrorism funding was increased.

Understanding the law with all its loopholes and clauses can be difficult, so let's break down some of the pros and cons of the Patriot Act.

Several of the sunset clauses included in the Patriot Act were deemed invasive by privacy rights activists, government officials, and private citizens in the US. Sunset clauses expire if they aren't reauthorized after a specific time frame, generally around 3-10 years.

The following sketchy Patriot Act clauses were extended by the USA Freedom Act in 2015, but expired in 2020:

It wasn't only laws under scrutiny. During the mass hysteria after 9/11, the US government detained legal US citizens and immigrants who weren't suspected of or charged with a crime. This led to a public outcry regarding these individuals' constitutional rights, as well as a call to release people with no proven connection to the 9/11 attacks. Unfortunately, it had already created a pronounced ethnic divide and made some immigrants targets.

The Patriot Act wasn't reauthorized, so it expired on March 15, 2020. Still, surveillance hasn't stopped, not by a long shot. The USA Freedom Act, while a vast improvement on the Patriot Act, still provides plenty of loopholes.

The USA Freedom Act came into effect in 2015 to right the wrongs of the Patriot Act, but it still leaves a few loopholes allowing government surveillance with little to no proof of illegal or terrorist activity.

When Snowden disclosed the NSA's bulk collection of private citizens' phone communications, it became clear our conversations weren't so private. The USA Freedom Act banned the bulk collection of citizens' personal data. It led to the reform of the FISA court and gave more power to the Amici role created in 2013, to ensure surveillance is executed legally and civil liberties are upheld.

The Freedom Act doesn't include changes to Section 702 of the 2008 FISA Amendments Act, which includes a law allowing the government to conduct mass surveillance. It also lacks concrete procedures for deleting information that has nothing to do with the target suspect. In addition, the language still makes it possible for law enforcement or government officials to collect information with minimal proof a threat exists.

Just as troubling, it allows for a 72-hour hold of any person the government has reasonable cause to suspect, enabling the Attorney General to get a new surveillance order. While this is a step up from indefinite detention, suspects can still be detained without any real proof of wrongdoing for up to 3 days.

The US Government has always conducted surveillance on private citizens in one form or another. It only intensified after the 9/11 attacks, giving lawmakers the ability to make most of its surveillance legal at the cost of citizens' privacy. Broad surveillance in the US more or less started in the 1950s.

The government intended to use FISA to gain the power to spy on foreign agents and groups, but it ended up broadening surveillance reach to US citizens. Soon, the government was secretly spying on the Average Joe, sidestepping citizens' constitutional right to privacy. Later, as technology advanced, the government used wiretapping to monitor electronic communication, with "reasonable belief" counting as probable cause.

The timeline below provides a snapshot of some of the most relevant acts regarding surveillance and privacy rights in the US.

The Whistleblower Protection Act (WPA) was created in 1989 to protect Federal employees who disclose tangible evidence of the following by the government:

The Act prevents retaliation like demotions, pay cuts, or dismissals for whistleblowers, and provides legal support if they're retaliated against. More importantly, it allows whistleblowers to make disclosures confidentially. So, why did Edward Snowden seek asylum in Russia? He was a federal employee disclosing evidence of a violation of laws, which should be protected.

Well, his method of disclosure was a bit loud and bypassed all the proper channels: Snowden stole official documents and leaked them to the British newspaper The Guardian. Regardless of his reasoning for not reporting it through US channels, his leak to a foreign entity branded him a spy and left him wanted for espionage.

Some people have expressed worry the former computer intelligence consultant will switch sides as he's a permanent resident in Russia. Considering 58% of state-sponsored cyberattacks in the US in 2021 originated in Russia, the thought is alarming. It seems, however, Snowden himself put the idea to rest in his interview with NPR, making it clear he doesn't intend to cooperate with Russia on any cyberattacks or government activities.

The US government monitors what you do online if they have a reason to suspect you're doing something you shouldn't be, but they aren't the only ones. ISPs, cybercriminals, websites, and even your neighbor Ted could monitor your online habits. Thankfully, you can avoid online surveillance with PIA VPN.

PIA provides secure tunneling protocols and robust encryption to shield your traffic from anyone who may be snooping. No one can spy on you, because our encryption hides your online activities and we change your IP address to make sure you're invisible.

Our MACE feature also blocks trackers, malware, and WebRTC at the DNS level before they reach your device, so you're safe from anything that could compromise your privacy. A VPN can't protect your devices against malware and phishing attacks, though. It's still up to you to be vigilant and follow basic online safety habits.

The Patriot Act was created to monitor and deter terrorist activity in the US by expanding the reach of law enforcement's investigatory powers. It also covers the punishment of terrorists and gives law enforcement the right to perform surveillance on anyone who is suspected of being involved in terrorist activities without their knowledge.

Unfortunately, people who aren't criminals may be treated like criminals. The Patriot Act only requires reasonable belief in order to monitor your online activity. Thankfully, you can avoid surveillance. PIA uses strong encryption to scramble your data and make it unreadable to anyone who may be watching.

The most commonly cited violation is the Fourth Amendment, which states the government can't conduct a search without a warrant or probable cause to believe a criminal act took place. Violations of the Fourth Amendment are considered a gross invasion of privacy.

Many people believe it violates most of the Bill of Rights, especially rights such as free speech, public trial, due process, and freedom from self-incrimination. PIA VPN can help you protect those rights by restoring your online privacy.

The USA Freedom Act replaced The Patriot Act when it expired on March 15, 2020. While it was a step in the right direction, it still allows for several loopholes when it comes to US citizens' rights. Mainly, it doesn't include regulations for the deletion of information on non-suspects obtained during surveillance, so irrelevant personal data can be kept indefinitely.

This means the government could still collect and store citizens' data, even without you knowing. To combat data tracking, use PIA VPN. We never collect usage logs, so even if authorities asked, we wouldn't have any data to give them.

The USA Freedom Act is similar but it has some vast improvements over the Patriot Act, including a ban on bulk collection of citizens' personal data and reform of the Amici role in the FISA court. Amici now watches carefully to make sure any search or surveillance is executed legally.

Prevent spying before it becomes an issue by installing PIA on your devices. One subscription to PIA covers up to 10 simultaneous connections, so you can protect every gadget you own. We even have a 30-day money-back guarantee, so it's risk-free to test our VPN!


Why Donald Trump will soon be indicted – Washington Times

OPINION:

It gives me no joy to write this piece.

Even a cursory review of the redacted version of the affidavit submitted in support of the government's application for a search warrant at the home of former President Donald Trump reveals that he will soon be indicted by a federal grand jury for three crimes: removing and concealing national defense information (NDI), giving NDI to those not legally entitled to possess it, and obstruction of justice by failing to return NDI to those who are legally entitled to retrieve it.

When he learned from a phone call that 30 FBI agents were at the front door of his Florida residence with a search warrant and he decided to reveal this publicly, Mr. Trump assumed that the agents were looking for classified top-secret materials that they'd allege he criminally possessed. His assumptions were apparently based on his gut instinct and not on a sophisticated analysis of the law. Hence, his public boast that he declassified all the formerly classified documents he took with him.

Unbeknownst to him, the feds had anticipated such a defense and are not preparing to indict him for possessing classified materials, even though he did possess hundreds of voluntarily surrendered materials marked top secret. It is irrelevant if the documents were declassified, as the feds will charge crimes that do not require proof of classification. They told the federal judge who signed the search warrant that Mr. Trump still had NDI in his home. It appears they were correct.

Under the law, it doesn't matter if the documents on which NDI is contained are classified or not, as it is simply and always criminal to have NDI in a non-federal facility, to have those without security clearances move it from one place to another, and to keep it from the feds when they are seeking it. Stated differently, the absence of classification for whatever reason is not a defense to the charges that are likely to be filed against Mr. Trump.

Yet, misreading and underestimating the feds, Mr. Trump actually did them a favor. One of the elements that they must prove for any of the three crimes is that Mr. Trump knew that he had the documents. The favor he did was admit to that when he boasted that they were no longer classified. He committed a mortal sin in the criminal defense world by denying something for which he had not been accused.

The second element that the feds must prove is that the documents actually do contain national defense information. And the third element they must prove is that Mr. Trump put these documents into the hands of those not authorized to hold them and stored them in a non-federally secured place. Intelligence community experts have already examined the documents taken from Mr. Trumps home and are prepared to tell a jury that they contain the names of foreign agents secretly working for the U.S. This is the crown jewel of government secrets. Moreover, Mr. Trumps Florida home is not a secure federal facility designated for the deposit of NDI.

The newest aspect of the case against Mr. Trump that we learned from the redacted affidavit is the obstruction allegation. This is not the obstruction that Robert Mueller claimed he found Mr. Trump committed during the Russia investigation. This is a newer obstruction statute, signed by President George W. Bush in 2002, that places far fewer burdens on the feds to prove. The older statute is the one Mr. Mueller alleged. It characterizes any material interference with a judicial function as criminal. Thus, one who lies to a grand jury or prevents a witness from testifying commits this variant of obstruction.

But the Bush-era statute, the one the feds contemplate charging Mr. Trump with having violated, makes it a crime of obstruction by failing to return government property or by sending the FBI on a wild goose chase looking for something that belongs to the government and that you know that you have. This statute does not require the preexistence of a judicial proceeding. It only requires that the defendant has the governments property, knows that he has it and baselessly resists efforts by the government to get it back.

Where does all this leave Mr. Trump? The short answer is: in hot water. The longer answer is: He is confronting yet again the federal law enforcement and intelligence communities for which he has rightly expressed such public disdain. He had valid points of expression during the Russia investigation. He has little ground upon which to stand today.

I have often argued that many of these statutes that the feds have enacted to protect themselves are morally unjust and not grounded in the Constitution. One of my intellectual heroes, the great Murray Rothbard, taught that the government protects itself far more aggressively than it protects our natural rights.

In a monumental irony, both Julian Assange, the WikiLeaks journalist who exposed American war crimes during the Afghanistan and Iraq wars, and Edward Snowden, the former National Security Agency employee who exposed criminal mass government surveillance of the American public, stand charged with the very same crimes that are likely to be brought against Mr. Trump. Mr. Trump argued that both Mr. Assange and Mr. Snowden should be executed. Fortunately for all three, these statutes do not provide for capital punishment.

Mr. Rothbard warned that the feds aggressively protect themselves. Yet, both Mr. Assange and Mr. Snowden are heroic defenders of liberty with valid moral and legal defenses. Mr. Assange is protected by the Pentagon Papers case, which insulates the media from criminal or civil liability for revealing stolen matters of interest to the public, so long as the revealer is not the thief. Mr. Snowden is protected by the Constitution, which expressly prohibits the warrantless surveillance he revealed, which was the most massive peacetime abuse of government power.

What will Mr. Trump say in his defense to taking national defense information? I cannot think of a legally viable one.

Andrew P. Napolitano is a former professor of law and judge of the Superior Court of New Jersey who has published nine books on the U.S. Constitution.


Venice Review: Laura Poitras' All The Beauty And The Bloodshed – Deadline

The scourge of the opioid crisis has been documented in the press and in government reports; the culpability of the Sacklers, the multi-billionaire pharmaceutical family whose former company Purdue made the painkiller Oxycontin, has been successfully dramatized. The Sacklers are everywhere in Laura Poitras' gripping documentary All the Beauty and the Bloodshed, but they are supporting players.

At its center is Nan Goldin, the 68-year-old photographer who was prescribed Oxycontin, quickly became addicted to it, found recovery through a replacement drug and then threw her energies into calling the Sacklers to account. Goldin became the most public face of the campaigning group PAIN, leading the charge into museums with Sackler wings, Sackler rooms and Sackler money to shame their well-heeled executives into cutting those ties. The Sacklers might have hijacked Goldin's body, but she could at least work to turf them out of the places that held her pictures.

Laura Poitras has a great ear for a dissenting voice. Her first full-length documentary My Country, My Country was about ordinary Iraqis living under U.S. occupation; it brought her critical acclaim and an Oscar nomination. It also put her on the Department of Homeland Security watchlist. Subsequent films have focused on the trials of two drivers who worked for Osama bin Laden, Wikileaks founder Julian Assange and intelligence whistle-blower Edward Snowden in Citizenfour, which won the Oscar as best documentary in 2015.

In All the Beauty and the Bloodshed, screening in competition at the Venice Film Festival, she draws a thread through the phases of Nan Goldin's life: as a child instinctively at odds with her frigid suburban family, as the famous chronicler of New York's bohemian fringes, and as the Goldin we see here, the stalwart campaigner leading a chant rejecting the Sackler family's patronage in the foyer of the Guggenheim. Poitras, the most meticulous of researchers in other contexts, doesn't provide a great deal of detail about the opioid crisis or the Sacklers' part in it. The fight, and Goldin's fight in particular as a surviving addict who has survived so many things in life, is the thing.

Goldin is a household name, at least in households with a passing interest in art. Her photographs of sexual and social outsiders are vivid and heartfelt, a nether world of sequins, sex, drugs, dissipation and genuine joy. For the people who see them in galleries, Goldin observes, "they look like cinema stills. Because most people think they are characters. But for the people being photographed, it's just them."

Less known was Goldin's own story which, as she tells it here, was grounded in a childhood of arid affluence. Her older sister Barbara cared for her, giving her the hugs, love and stories that were beyond her prim mother, until she was diagnosed as mentally ill and, in her early teens, sent to an orphanage. A couple of years later, she committed suicide. "Barbara was a rebel at heart," says Goldin. "She just didn't have the power to go into full-blown rebellion, the way I did."

Nobody was supposed to speak about it. Nobody was supposed to discuss anything that didn't sound respectable. For an entire year, the child who became Nan Goldin didn't speak at all. Her parents placed her in foster care; talking to Poitras, she suddenly remembers being physically sick with fear. Fortunately, she wound up in a progressive school, the only one that would have her after many expulsions, where she was provided with a camera. "It was the only voice I had." It also gave her a passage out.

Everyone involved in PAIN, the Oxycontin survivors' campaign for redress, knows how crucial it is that one of the most recognized names in the contemporary art world is seen to be at the forefront of a battle within that world. It isn't the war, which they would feel had been won if the Sacklers were in jail, but winning the battle is certainly something. One by one, the museums they target announce they won't be taking the Sacklers' tainted money any more. Their name starts to be removed from gallery walls. It may be a victory of largely symbolic value, but patronage in itself is symbolic, a way to make dirty money seem clean.

Just as crucial as Goldin's place in that world, however, is her willingness to make headlines by talking and writing about her own addiction, describing the abjection of a life built around scoring and using without pulling her punches. What All the Beauty and the Bloodshed makes clear is that this is all of a piece with the photographs of drag queens, prostitutes and parties, the angry records of AIDS sufferers, the portraits that show glamour and tenderness where others might see the grotesque.

Poitras never shoots Goldin in a way that lionizes her or gives her the stature of a warrior queen, even though that would be easy enough to do with some emphatic angling and the right lighting. She puts her camera squarely in front of Goldin and shows her at work. In the process, she makes a stupendous work of her own.


Roger Waters Brings Stunning Visuals, Fiery Politics to NYC on This Is Not a Drill Tour – Billboard

Before Roger Waters even took the stage Wednesday (Aug. 31) night for the second Madison Square Garden show on his This Is Not a Drill tour, the British rocker's genial yet prickly voice issued forth a pre-recorded warning from the speakers: "If you're one of those 'I like Pink Floyd but I can't stand Roger's politics' people, you might do well to f**k off to the bar right now," he said with a laugh.


It was a fair warning, as the 78-year-old legend's current tour (rescheduled from 2020 due to the pandemic) is as heavy on no-holds-barred political commentary as it is on music from the influential psych-rock band that made him famous. And there's certainly no shortage of Floyd songs (which account for more than half of his setlist) over the course of his generous two-act show involving harrowing dystopian visuals, remote-controlled floating animals and a ton of smoke. (Well, the smoke wasn't so much from Waters' 140-person crew as it was the gray-headed fans who sparked up the moment he began singing "Another Brick in the Wall" near the top of the show.)

If there were any "shut up and sing" types in the audience that night, they were either strangely silent or took his advice about f**king off to the bar. The crowd's response was either supportive or respectfully neutral to images branding everyone from Ronald Reagan to sitting President Biden a war criminal. There were claps, and even tears, when he ran footage of police officers mercilessly beating unarmed, nonviolent civilians along with the names of murder victims from George Floyd in Minneapolis to journalist Shireen Abu Akleh in Palestine. All of that went down during a driving, funky take on "The Powers That Be" from his 1987 solo album Radio K.A.O.S., which sounds pretty dated if you're listening to the studio version (the MOR '80s rock production is strong on that one) but boasts a weightier urgency when delivered by his current touring band.

Waters deserves credit for forcing 20,000 nostalgia-seeking Floyd fans to face uncomfortable realities that most concerts serve as an escape from. He reminded everyone that MSG (and all of New York City) sits on land stolen from the Munsee-Lenape people centuries ago. And during a thumping version of "Run Like Hell" that segued into an acoustic "Déjà Vu" from his most recent solo effort, Is This the Life We Really Want?, he made us Americans confront the bone-chillingly blasé footage of U.S. troops gunning down two Reuters journalists in a peaceful public space after they mistook cameras for weapons back in 2007. (After Chelsea Manning's decision to leak the footage caused a reckoning three years later, a spokesperson for U.S. Central Command said, "We regret the loss of innocent life," although no one was ever punished for the deaths. A "Free Julian Assange" message accompanied the footage.)

Significantly less laudable, however, are Waters' ongoing comments on the war in Ukraine, which he doubled down on Wednesday night. Fresh off a euphoric, laser-laden performance of the entire second side of The Dark Side of the Moon, Waters chided the U.S. and NATO for not ending the war in Ukraine. "What we are doing, poking sticks in Russian bears, is completely insane," he offered, while seated at a piano toward the end of the show.

The audience, at that point, was either too hypnotized by the music or stoned off second-hand smoke to do much other than exchange quizzical glances, wondering if they'd heard him correctly; presumably, not everyone in attendance was familiar with Waters' recent comments on Russia's invasion of Ukraine. He has faulted both Biden and Ukrainian President Zelenskyy for insufficient negotiations with Russia, which many have seen as a victim-blaming stance, considering that Russia was the invading aggressor. He's also insisted Russia was "pushed" into this war, saying, "This war is basically about the action and reaction of NATO pushing right up to the Russian border," a line of logic not too dissimilar from saying Nazi Germany was "pushed" to invade Poland because of the stiff reparations imposed on the country after World War I, as if explaining the cause of a hostile invasion somehow frees the invading country of moral responsibility.

The muted response to his cringe-worthy Ukraine comments could also have simply been a result of the crowd giving him a pass, considering how astonishing This Is Not a Drill looks and sounds; it's an arresting spectacle complemented by an equally immersive sonic experience that manages to be loud as hell without veering into head-splitting levels that have you reaching for earplugs. And when that inflatable sheep made the rounds above the heads of fans during, naturally, "Sheep," the visible delight on everyone's face was as life-affirming as the darker imagery was depressing. (Plus, there's a delicious irony in watching hundreds of people reflexively pull out their cell phones the moment a giant sheep appears above their heads.)

All in all, Waters' This Is Not a Drill tour serves as a reminder of two important truths: His visual and musical art remains as vital, timely and invigorating as ever, and if someone talks politics at you for two hours, they're eventually going to say something that does indeed find you wishing you were at the bar instead.


Machine learning at the edge: The AI chip company challenging Nvidia and Qualcomm – VentureBeat


Today's demand for real-time data analytics at the edge marks the dawn of a new era in machine learning (ML): edge intelligence. That need for time-sensitive data is, in turn, fueling a massive AI chip market, as companies look to provide ML models at the edge that have less latency and more power efficiency.

Conventional edge ML platforms consume a lot of power, limiting the operational efficiency of smart devices, which live on the edge. Those devices are also hardware-centric, limiting their computational capability and making them incapable of handling varying AI workloads. They leverage power-inefficient GPU- or CPU-based architectures and are also not optimized for embedded edge applications that have latency requirements.

Even though industry behemoths like Nvidia and Qualcomm offer a wide range of solutions, they mostly use a combination of GPU- or data center-based architectures and scale them to the embedded edge as opposed to creating a purpose-built solution from scratch. Also, most of these solutions are set up for larger customers, making them extremely expensive for smaller companies.

In essence, the $1 trillion global embedded-edge market is reliant on legacy technology that limits the pace of innovation.


ML company Sima AI seeks to address these shortcomings with its machine learning-system-on-chip (MLSoC) platform that enables ML deployment and scaling at the edge. The California-based company, founded in 2018, announced today that it has begun shipping the MLSoC platform to customers, with an initial focus on helping solve computer vision challenges in smart vision, robotics, Industry 4.0, drones, autonomous vehicles, healthcare and the government sector.

The platform uses a software-hardware codesign approach that emphasizes software capabilities to create edge-ML solutions that consume minimal power and can handle varying ML workloads.

Built on 16nm technology, the MLSoC's processing system consists of computer vision processors for image pre- and post-processing, coupled with dedicated ML acceleration and high-performance application processors. Surrounding the real-time intelligent video processing are memory interfaces, communication interfaces, and system management, all connected via a network-on-chip (NoC). The MLSoC features low operating power and high ML processing capacity, making it ideal as a standalone edge-based system controller, or to add an ML-offload accelerator for processors, ASICs and other devices.

The software-first approach includes carefully defined intermediate representations (including the TVM Relay IR), along with novel compiler-optimization techniques. This software architecture enables Sima AI to support a wide range of frameworks (e.g., TensorFlow, PyTorch, ONNX) and compile more than 120 networks.

Many ML startups are focused on building only pure ML accelerators, not an SoC with a computer-vision processor, application processors, codecs, and external memory interfaces that let the MLSoC be used as a standalone solution without connecting to a host processor. Other solutions usually lack network flexibility, performance per watt, and push-button efficiency, all of which are required to make ML effortless for the embedded edge.

Sima AI's MLSoC platform differs from other existing solutions in that it addresses all of these areas at once with its software-first approach.

"The MLSoC platform is flexible enough to address any computer vision application, using any framework, model, network, and sensor with any resolution. Our ML compiler leverages the open-source Tensor Virtual Machine (TVM) framework as the front-end, and thus supports the industry's widest range of ML models and ML frameworks for computer vision," Krishna Rangasayee, CEO and founder of Sima AI, told VentureBeat in an email interview.

From a performance point of view, Sima AI claims its MLSoC platform delivers 10x better performance than alternatives on key figures of merit such as FPS/W and latency.

The company's hardware architecture optimizes data movement and maximizes hardware performance by precisely scheduling all computation and data movement ahead of time, including internal and external memory, to minimize wait times.

Sima AI offers APIs to generate highly optimized MLSoC code blocks that are automatically scheduled on the heterogeneous compute subsystems. The company has created a suite of specialized and generalized optimization and scheduling algorithms for the back-end compiler that automatically convert the ML network into highly optimized assembly codes that run on the machine learning-accelerator (MLA) block.
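To make the ahead-of-time scheduling idea concrete, here is a deliberately tiny, stdlib-only Python sketch. This is not Sima AI's compiler or its APIs; the op names, unit names, and costs are all invented for illustration. It list-schedules a small dependency graph onto two heterogeneous units by earliest finish time:

```python
# Toy ahead-of-time list scheduler: every op in a small dependency graph is
# assigned to whichever compute unit can finish it earliest, so the entire
# schedule is fixed before run time.

# op -> (dependencies, cost on each unit type); all values are made up
GRAPH = {
    "decode":  ((),          {"vision": 2, "mla": 5}),
    "resize":  (("decode",), {"vision": 1, "mla": 4}),
    "conv1":   (("resize",), {"vision": 6, "mla": 2}),
    "conv2":   (("conv1",),  {"vision": 6, "mla": 2}),
    "overlay": (("conv2",),  {"vision": 1, "mla": 3}),
}

def schedule(graph):
    """Return {op: (unit, start, end)} via earliest-finish-time placement."""
    unit_free = {"vision": 0, "mla": 0}  # when each unit is next idle
    placed = {}
    remaining = dict(graph)
    while remaining:
        # pick any op whose dependencies are all already placed
        op = next(o for o, (deps, _) in remaining.items()
                  if all(d in placed for d in deps))
        deps, costs = remaining.pop(op)
        ready = max((placed[d][2] for d in deps), default=0)
        # choose the unit on which this op finishes soonest
        unit = min(costs, key=lambda u: max(ready, unit_free[u]) + costs[u])
        start = max(ready, unit_free[unit])
        end = start + costs[unit]
        unit_free[unit] = end
        placed[op] = (unit, start, end)
    return placed

plan = schedule(GRAPH)
makespan = max(end for _, _, end in plan.values())
```

Because every placement and start time is decided before execution, a real compiler taking this approach can also pre-plan the memory transfers each assignment implies, which is where the wait-time savings described above come from.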

For Rangasayee, the next phase of Sima AI's growth is focused on revenue and on scaling its engineering and business teams globally. As things stand, Sima AI has raised $150 million in funding from top-tier VCs such as Fidelity and Dell Technologies Capital. With the goal of transforming the embedded-edge market, the company has also announced partnerships with key industry players like TSMC, Synopsys, Arm, Allegro, GUC and Arteris.


Link:
Machine learning at the edge: The AI chip company challenging Nvidia and Qualcomm - VentureBeat

4 Ways AI, Analytics and Machine Learning Are Improving Customer Service and Support – CMSWire

Many of today's marketing processes are powered by AI and machine learning. Discover how these technologies are shaping the future of customer experience.

By using artificial intelligence (AI) and machine learning (ML) along with analytics, brands are in a much better position to elevate customer service experiences at every touchpoint and create positive emotional connections.

This article will look at the ways that AI and ML are used by brands to improve customer service and support.

AI improves the customer service journey in several ways, including tracking conversations in real-time, providing feedback to service agents and using intelligence to monitor language, speech patterns and psychographic profiles to predict future customer needs.

This functionality can also drastically enhance the effectiveness of customer relationship management (CRM) and customer data platforms (CDPs).

CRM platforms, including C2CRM, Salesforce Einstein and Zoho, have integrated AI into their software to provide real-time decisioning, predictive analysis and conversational assistants, all of which help brands more fully understand and engage their customers.

CDPs, such as Amperity, BlueConic, Adobe's Real-Time CDP and ActionIQ, have also integrated AI into more traditional capabilities to unify customer data and provide real-time functionality and decisioning. This technology enables brands to gain a deeper understanding of what their customers want, how they feel and what they are most likely to do next.

Related Article: What's Next for Artificial Intelligence in Customer Experience?

Artificial intelligence and machine learning are now used for gathering and analyzing social, historical and behavioral data, which allows brands to gain a much more complete understanding of their customers.

Because AI continuously learns and improves from the data it analyzes, it can anticipate customer behavior. As such, AI- and ML-driven chatbots can provide customers with a more personalized, informed conversation that can answer their questions and, if it can't, immediately route them to a live customer service agent.

Bill Schwaab, VP of sales, North America for boost.ai, told CMSWire that ML is used in combination with AI and a number of other deep learning models to support today's virtual customer service agents.

"ML on its own may not be sufficient to gain a total understanding of customer requests, but it's useful in classifying basic user intent," said Schwaab, who believes that the brightest applications of these technologies in customer service find the balance between AI and human intervention.

"Virtual agents are becoming the first line in customer experience in addition to human agents," he explained. Because these virtual agents can resolve service queries quickly and are available outside of normal service hours, human agents can focus on more complex or valuable customer interactions. Round-the-clock availability provides brands with additional time to capture customer input and inform better decision-making.

Swapnil Jain, CEO and co-founder of Observe.AI, said that today's customer service agents no longer have to spend as much time on simpler, transactional interactions, as digital and self-serve options have reduced the volume of those tasks.

"Instead, agents must excel at higher-value, complex behaviors that meaningfully impact CX and revenue," said Jain, adding that brands are harnessing AI and ML to up-level agent skills, which include empathy and active listening. This, in turn, "drives the behavioral changes needed to improve CX performance at speed and scale."

"Because customer conversations contain a goldmine of insights for improving agent performance, AI-powered conversation intelligence can help brands with everything from service and support to sales and retention," said Jain. Using advanced interaction analytics, brands can benefit from pinpointing positive and negative CX drivers, advanced tonality-based sentiment and intent analysis, and evidence-based agent coaching.

Predictive analytics is the process of using statistics, data mining and modeling to make predictions.

AI can analyze large amounts of data in a very short time, and along with predictive analytics, it can produce real-time, actionable insights that can guide interactions between a customer and a brand. This practice is also referred to as predictive engagement and uses AI to inform a brand when and how to interact with each customer.
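As an illustration of the mechanics, here is a minimal sketch of predictive engagement in plain Python: fit a least-squares trend line to a customer's past interaction counts and extrapolate one step ahead. The data and the idea of flagging a declining trend are illustrative assumptions, not any vendor's method.

```python
# Minimal sketch of predictive engagement: fit a least-squares line to a
# customer's past weekly interaction counts and predict the next week.
# The numbers below are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict_next(history):
    """Predict the next value from an evenly spaced history."""
    xs = list(range(len(history)))
    slope, intercept = fit_line(xs, history)
    return slope * len(history) + intercept

# Declining engagement: a brand might proactively reach out to this customer.
interactions = [9, 7, 6, 4]
forecast = predict_next(interactions)
print(round(forecast, 1))  # the falling trend predicts further decline
```

Real predictive-engagement systems use far richer models, but the principle is the same: learn a pattern from past behavior and act on the extrapolation before the customer churns.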

Don Kaye, CCO of Exasol, spoke with CMSWire about the ways brands are using predictive analytics as part of their data strategies that link to their overall business objectives.

"We've seen first-hand how businesses use predictive analytics to better inform their organizations' decision-making processes to drive powerful customer experiences that result in brand loyalty and earn consumer trust," said Kaye.

As an example, he told CMSWire that banks use supervised learning, such as regression and classification, to calculate the risk of loan defaults, while IT departments use it to detect spam.

"With retailers, we've seen them seeking the benefits of deep learning or reinforcement learning, which enables a new level of end-to-end automation, where models become more adaptable and use larger data volumes for increased accuracy," he said.

According to Kaye, businesses with advanced analytics also tend to have agile, open data architectures that promote open access to data, also known as data democratization.

Kaye is a big advocate for AI and ML and believes that the technologies will continue to grow and become routine across all verticals, with the democratization of analytics enabling data professionals to focus on more complex scenarios and making customer experience personalization the norm.

Related Article: What Customer-Centric Predictive Analytics Looks Like

AI-driven sentiment analysis enables brands to obtain actionable insights. It facilitates a better understanding of the emotions customers feel when they encounter pain points or friction along the customer journey, as well as how they feel when they have positive, emotionally satisfying experiences.

Julien Salinas, founder and CTO at NLP Cloud, told CMSWire that AI is often used to perform sentiment analysis to automatically detect whether an incoming customer support request is urgent or not. "If the detected sentiment is negative, the ticket is more likely to be addressed quickly by the support team."

Sentiment analysis can automatically detect emotions and opinions by classifying customer text as positive, negative or neutral through the use of AI, natural language processing (NLP) and ML.
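To make the classification step concrete, here is a deliberately tiny, lexicon-based sketch in Python. Production systems use trained NLP models; the word lists, labels and routing rule below are illustrative assumptions only.

```python
# A minimal lexicon-based sketch of sentiment classification for support
# tickets, plus the urgency routing described above. The cue-word lists
# are invented for illustration; real systems use trained NLP models.

POSITIVE = {"great", "love", "helpful", "thanks", "excellent"}
NEGATIVE = {"broken", "terrible", "refund", "angry", "urgent"}

def classify_sentiment(text):
    """Label text positive, negative or neutral by counting cue words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

def route_ticket(text):
    """Negative tickets jump the queue, so the support team sees them first."""
    return "priority" if classify_sentiment(text) == "negative" else "standard"

print(route_ticket("my app keeps crashing and i am angry"))  # priority
```

Even this toy version shows the shape of the pipeline: classify first, then let the label drive a business action such as queue placement.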

Pieter Buteneers, director of engineering in ML and AI at Sinch, said that NLP enables applications to understand, write and speak languages in a manner that is similar to humans.

"It also facilitates a deeper understanding of customer sentiment," he explained. When NLP is incorporated into chatbots and voice bots, it permits them to have seemingly human-like language proficiency and to adjust their tone during conversations.

"When used in conjunction with chatbots, NLP can facilitate human-like conversations based on sentiment. So if a customer is upset, for example, the bot can adjust its tone to defuse the situation while moving along the conversation," said Buteneers. "This would be an intuitive shift for a human, but bots that aren't equipped with NLP sentiment analysis could miss the subtle cues of human sentiment in the conversation and risk damaging the customer relationship."

Buteneers added that breakthroughs in NLP are making an enormous difference in how AI understands input from humans. For example, NLP can be used to perform textual sentiment analysis, which deciphers the polarity of sentiments in text.

Similar to sentiment analysis, AI is also useful for detecting intent. Salinas said that it's sometimes difficult to get a quick grasp of a user request, especially when the user's message is very long. In that case, AI can automatically extract the main idea from the message so the support agent can act more quickly.
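A minimal sketch of that idea: score a long message against a few keyword sets and surface the most likely intent. The intent labels and keywords are hypothetical, chosen only to illustrate the approach.

```python
# Hypothetical sketch of intent detection: score a long message against a
# few intent keyword sets and return the best match so an agent can act
# quickly. Intent labels and keyword sets are illustrative assumptions.

INTENTS = {
    "refund": {"refund", "money", "charge", "charged"},
    "cancel": {"cancel", "cancellation", "unsubscribe"},
    "tech_support": {"error", "crash", "bug", "broken"},
}

def detect_intent(message):
    """Pick the intent whose keywords overlap the message the most."""
    words = set(message.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

msg = ("I have been charged twice this month and I would like a refund "
       "for the duplicate charge")
print(detect_intent(msg))  # refund
```

A production intent model would be trained on labeled conversations rather than hand-written keyword sets, but the output contract is the same: a single label an agent or bot can act on.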

While AI and ML have continued to evolve, and brands have found many ways to use these technologies to improve the customer service experience, the challenges of AI and ML can still be daunting.

Kaye explained that AI models need good data to deliver accurate results, so brands must also focus on quality and governance.

"In-memory analytics databases will become the driver of creation, storage and loading features in ML training tools given their analysis capabilities, and ability to scale and deliver optimal time to insight," said Kaye. He added that these tools will benefit from closer integration with the company's data stores, which will enable them to run more effectively on larger data volumes and guarantee greater system scalability.

Iliya Rybchin, partner at Elixirr Consulting, told CMSWire that thanks to ML and the vast amount of data bots are collecting, they are getting better and will continue to improve. The challenge is that they will improve in proportion to the data they receive.

"Therefore, if an under-represented minority with a unique dialect is not utilizing a particular service as much as other consumers, the ML will start to discount the aspects of that dialect as outliers vs. common language," said Rybchin.

He explained that the issue is not caused by the technology or programming but rather is the result of the consumer-facing product not providing equal access to the bot. "The solution is more about bringing more consumers to the product vs. changing how the product is built or designed."

AI and ML have been incorporated into the latest generations of CDP and CRM platforms, and conversational AI-driven bots are assisting service agents and enhancing and improving the customer service experience. Predictive analytics and sentiment analysis, meanwhile, are enabling brands to obtain actionable insights that guide the subsequent interactions between a customer and a brand.

See more here:
4 Ways AI, Analytics and Machine Learning Are Improving Customer Service and Support - CMSWire

What’s the Difference Between Vertical Farming and Machine Learning? – Electronic Design

What you'll learn

Sometimes inspiration comes in the oddest ways. I like to watch CBS News Sunday Morning because of the variety of stories they air. Recently, they did one on "Vertical Farming: A New Form of Agriculture."


For those who didn't watch the video, vertical farming is essentially a method of indoor farming using hydroponics. Hydroponics isn't new; it's a subset of hydroculture where crops are grown without soil. Instead, the plants grow in mineral-enriched water. This can be done in conjunction with sunlight, but typically an artificial light source is used.

The approach is useful in areas that don't provide enough light, or at times and in locations where the temperature or conditions outside would not be conducive to growing plants.

Vertical farming is hydroponics taken to the extreme, with stacks upon stacks of trays with plants under an array of lights. These days, the lights typically are LEDs because of their efficiency and the ability to generate the type of light most useful for plant growth. Automation can be used to streamline planting, support, and harvesting.

A building can house a vertical farm anywhere in the world, including in the middle of a city. Though lots of water is required, it's recycled, making it more efficient than other forms of agriculture.

Like many technologies, the opportunities are great if you ignore the details. That's where my usual contrary nature came into play, though, since I followed up my initial interest by looking for limitations or problems related to vertical farming. Of course, I found quite a few, and then noticed that many of the general issues applied to another topic I cover a lot: machine learning/artificial intelligence (ML/AI).

If you made it this far, you know how I'm looking at the difference between machine learning and vertical farming. They obviously have no relationship in terms of their technology and implementation, but they do have much in common when one looks at the potential problems and solutions related to those technologies.

As electronic system designers and developers, we constantly deal with potential solutions and their tradeoffs. Machine learning is one of those generic categories that has proven useful in many instances. However, one must be wary of the issues underlying those flashy approaches.

Vertical farming, like machine learning, is something one can dabble in. To be successful, though, it helps to have an expert, or at least someone who can quickly gain that experience. This tends to be the case with new and renewed technologies in general. I suspect significantly more ML experts are available these days for a number of reasons, such as the falling cost of hardware, but the demand remains high.

Vertical farming uses a good bit of computer automation. The choice of plants, fertilizers, and other aspects of hydroponic farming is critical to the success of the farm. Then there's the maintenance aspect. ML-based solutions are one way of reducing the expertise or time required by the staff to support the system.

ML programmers and developers also have access to easier-to-use tools, reducing the amount of expertise and training required to take advantage of ML solutions. These tools often incorporate their own ML models, which are different from those being generated.

Hydroponics works well for many plants but, unfortunately, not for others. Crops like microgreens do well, for example, whereas a cherry or apple tree often struggles with this treatment.

ML suffers from the same problem in that it's not applicable to all computational chores. But, unlike vertical farms, ML applications and solutions are more diverse. The challenge for developers comes down to understanding where ML is and isn't applicable. Trying to force-fit a machine-learning model to handle a particular problem can result in a solution that provides poor results at high cost.

Vertical farms require power for lighting and to move liquid. ML applications tend to do lots of computation and thus require a good deal of power compared to other workloads. One big difference between the two is that ML solutions are scalable, and the hardware tradeoffs can be significant.

For example, ML hardware can deliver performance that's orders of magnitude better than software-only solutions while reducing power requirements. Likewise, even software-only solutions may be efficient enough to do useful work while using little power, simply because developers have made the ML models work within the limitations of their design. Vertical farms do not have this flexibility.

Large vertical farms do require a major investment, and they're not cheap to run due to their scale. The same is true for cloud-based ML solutions utilizing the latest in disaggregated cloud-computing centers. Such data centers are leveraging technologies like SmartNICs and smart storage to use ML models closer to communication and storage than was possible in the past.

The big difference with vertical farming versus ML is scalability. It's now practical for multiple ML models to be running in a smartwatch with a dozen sensors. But that doesn't compare to dealing with agriculture, which must scale with the rest of the physical world's requirements, such as the plants themselves.

Still, ML does require a significant investment in development and in building the experience needed to apply it well. Software and hardware vendors have been working to lower both startup and long-term development costs, helped by the plethora of free software tools and low-cost hardware now generally available.

Cut the power on a vertical farm and things come to a grinding halt rather quickly, although it's not like having an airplane lose power at 10,000 feet. Still, plants do need sustenance and light, though they're accustomed to changes over time. Nonetheless, responding to failures within the system is important to the system's long-term usefulness.

ML applications require electricity to run, but that tends to be true of the entire system. A more subtle problem with ML applications is the source of input, which is typically sensors such as cameras, temperature sensors, etc. Determining whether the input data is accurate can be challenging; in many cases, designers simply assume that this information is accurate. Applications such as self-driving cars often use redundant and alternative inputs to provide a more robust set of inputs.

Vertical-farming technology continues to change and become more refined, but it's still maturing. The same is true for machine learning, though the comparison is something like a penny bank versus Fort Knox. There are simply more ML solutions, many of which are very mature, with millions of practical applications.

That said, ML technologies and applications are so varied, and the rate of change so large, that keeping up with what's available, let alone how things work in detail, can be overwhelming.

Vertical farming is benefiting from advances in technology, from robotics to sensors to ML. Tracking plant growth, monitoring germination, and detecting pests are just a few tasks that apply across all of agriculture, including vertical farming.

As with many What's the Difference articles, the comparisons are not necessarily one-to-one, but hopefully you picked up something about ML or vertical farms that was of interest. Many issues don't map well, like the problem of pollination for vertical farms. Though the output of vertical farms will likely feed some ML developers, ML is likely to play a more important part in vertical farming, given the level of automation now possible with sensors, robots, and ML monitoring.

Read more from the original source:
What's the Difference Between Vertical Farming and Machine Learning? - Electronic Design

Kauricone: Machine learning tackles the mundane, making our lives easier – IT Brief New Zealand

A New Zealand startup producing its own servers is expanding into the realm of artificial intelligence, creating machine learning solutions that carry out common tasks while relieving people of repetitive, unsatisfying work. Having spotted an opportunity for the development of low-cost, high-efficiency and environmentally sustainable hardware, Kauricone has more recently pivoted in a fascinating direction: creating software that thinks about mundane problems so we don't have to. These tasks include identifying trash for improved recycling, 'looking' at items on roads for automated safety, identifying pests and, in the ultimate alleviation of a notoriously sleep-inducing task, counting sheep.

Managing director, founder and tech industry veteran Mike Milne says Kauricone products include application servers, cluster servers and internet of things servers. It was in this latter category that the notion emerged of applying machine learning at the network's edge.

"Having already developed low-cost, low-power edge hardware, we realised there was a big opportunity for the application of smart computing in some decidedly not-so-enjoyable everyday tasks," relates Milne. "After all, we had all the basic building blocks already: the hardware, the programming capability, and, with good mobile network coverage, the connectivity."

Situation

Work is just another name for tasks people would rather not do, or cannot do, for themselves. And despite living in a fabulously advanced age, there is a persistent reality of all manner of tasks which must be done every day but which don't require a particularly high level of engagement or even intelligence.

It is these tasks for which machine learning (ML) is quite often a highly promising solution. "ML collects and analyses data by applying statistical analysis and pattern matching to learn from past experiences. Using the trained data, it provides reliable results, and people can stop doing the boring work," says Milne.

There is in fact more to it than meets the eye (so to speak) when it comes to computer image recognition. That's why 'Captcha' challenges are often little more than 'Identify all the images containing traffic lights': because distinguishing objects is hard for bots. ML overcomes the challenge through the 'training' mentioned by Milne: the computer is shown thousands of images and learns which are hits, and which are misses.
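The hit-or-miss training idea can be illustrated with a toy nearest-centroid classifier: average the labeled examples, then label new inputs by whichever average they sit closest to. The 2-D "features" stand in for real image descriptors and are invented for the example.

```python
# Toy illustration of training from labeled hits and misses: compute the
# centroid of each class, then classify new points by nearest centroid.
# The (roundness, texture) feature pairs are made up, not real image data.

def centroid(points):
    """Element-wise mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(hits, misses):
    return {"hit": centroid(hits), "miss": centroid(misses)}

def classify(model, point):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], point))

# "Training images" reduced to two invented descriptor scores each:
model = train(hits=[(0.9, 0.8), (0.8, 0.9)], misses=[(0.1, 0.2), (0.2, 0.1)])
print(classify(model, (0.85, 0.75)))  # hit
```

Real image recognition trains deep networks on raw pixels rather than two hand-picked numbers, but the principle is the same: labeled examples shape a decision rule that generalizes to new inputs.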

"Potentially, there are as many use cases as you have dull but necessary tasks in the world," Milne notes. "So far, we've tackled a few. Rocks on roads are dangerous, but monitoring thousands of kilometers of tarmac comes at a cost. Construction waste is extensive, bad for the environment and should be managed better. Sheep are plentiful and not always in the right paddock. And pests put New Zealand's biodiversity at risk."

Solution

Tackling each of these problems, Kauricone started with its own RISC IoT server hardware as the base. Running Ubuntu and programmed with Python or other open-source languages, the servers typically feature 4GB of memory and 128GB of solid-state storage. The solar-powered edge devices consume as little as 3 watts and run indefinitely on a single solar panel. "This makes for a reliable, low-cost, field-ready device," says Milne.
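A quick back-of-the-envelope check (our arithmetic, not Kauricone's figures) shows why a single panel can plausibly sustain a 3-watt device; the sun-hour and efficiency numbers are assumptions.

```python
# Back-of-the-envelope solar budget for a 3 W always-on edge device.
# PEAK_SUN_HOURS and SYSTEM_EFFICIENCY are assumed values, not vendor data.

DEVICE_WATTS = 3.0
HOURS_PER_DAY = 24
PEAK_SUN_HOURS = 4.0      # assumed daily equivalent full-sun hours
SYSTEM_EFFICIENCY = 0.7   # assumed charging and conversion losses

daily_need_wh = DEVICE_WATTS * HOURS_PER_DAY  # energy the device uses per day
panel_watts_required = daily_need_wh / (PEAK_SUN_HOURS * SYSTEM_EFFICIENCY)

print(round(daily_need_wh), "Wh/day needed")
print(round(panel_watts_required, 1), "W panel required")
```

Under these assumptions the device needs 72 Wh per day, so a modest panel of roughly 25 to 30 W (plus a battery for overnight operation) keeps it running indefinitely.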

The Rocks on Roads project made clear the challenges of 'simple' image identification, with Kauricone eventually running a training model around the clock for eight days, gathering 35,000 iterations of rock images, which expanded to 3,000,000 identifiable traits (bear in mind, a human identifies a rock almost instantly, perhaps faster if hurled). With this training, the machine became very good at detecting rocks on roads.

For a new project involving construction waste, the Kauricone IoT server will maintain a vigilant watch on the types and amounts of waste going into building-site skips. Trained to identify types of waste, the resulting data will be the basis for improving waste management and recycling or redirecting certain items for more responsible disposal.

Counting sheep isn't only a method for accelerating sleep; it's also an essential task for farmers across New Zealand. That's not all: as an ML exercise, it anticipates the potential for smarter stock management, as does the related pest-identification test case pursued by Kauricone. The ever-watchful camera and supporting hardware manage several tasks: identifying individual animals, counting them, and monitoring grass levels, essential for ovine nourishment. Tested so far on a small flock, this application is ready for scale.

Results

Milne says the small test cases pursued by Kauricone to date are just the beginning, and he anticipates considerable potential for ML applications across all walks of life. "There is literally no end to the number of daily tasks where computer vision and ML can alleviate our workload and contribute to improved efficiency and, ultimately, a better and more sustainable planet," he notes.

The Rocks on Roads project promises improved safety with a lower 'human' overhead, reducing or eliminating the possibility of human error. Waste management is a multifaceted problem where the employment of personnel is rendered difficult owing to simple economics (and potentially stultifying work); New Zealand's primary sector is ripe for technologically powered performance improvements, which could boost already impressive productivity through automation and improved control; and pest management can help the Department of Conservation and allied parties achieve better results using fewer resources.

"It's early days yet," says Milne, "but the results from these exploratory projects are promising. With the connectivity of ever-expanding cellular and low-power networks like SIGFOX and LoRaWAN, the enabling infrastructure is increasingly available even in remote places. And purpose-built low-power hardware brings computing right to the edge. Now, it's just a matter of identifying opportunities and creating the applications."

For more information visit Kauricone's website.

Read the rest here:
Kauricone: Machine learning tackles the mundane, making our lives easier - IT Brief New Zealand

5 Ways Data Scientists Can Advance Their Careers – Spiceworks News and Insights

Data and machine learning people join companies with the promise of cutting-edge ML models and technology. But often, they spend 80% of their time cleaning data or dealing with data riddled with missing values and outliers, a frequently changing schema, and massive load times. The gap between expectation and reality can be massive.

Although data scientists might initially be excited to tackle insights and advanced models, that enthusiasm quickly deflates amidst daily schema changes, tables that stop updating, and other surprises that silently break models and dashboards.

While data science applies to a range of roles, from product analytics to putting statistical models in production, one thing is usually true: data scientists and ML engineers often sit at the tail end of the data pipeline. They're data consumers, pulling data from data warehouses, S3 or other centralized sources. They analyze it to help make business decisions or use it as training input for machine learning models.

In other words, they're impacted by data quality issues but aren't often empowered to travel up the pipeline to fix them. So they write a ton of defensive data preprocessing into their work, or move on to a new project.

If this scenario sounds familiar, you don't have to give up or complain that the data engineering upstream is forever broken. Make like a scientist and get experimental. You're the last step in the pipeline and the one putting models into production, which means you're responsible for the outcome. While this might sound terrifying or unfair, it's also a brilliant opportunity to shine and make a big difference in your team's business impact.

Here are five ways data scientists and ML engineers can get out of defense mode and ensure that, even if they didn't create the data quality issues, they prevent them from impacting the teams that rely on data.

Business executives hesitate to make decisions based on data alone. A KPMG report showed that 60% of companies don't feel very confident in their data, and 49% of leadership teams didn't fully support the internal data and analytics strategy.

Good data scientists and ML engineers can help by increasing data accuracy, then getting it into dashboards that help key decision-makers. In doing so, they'll have a direct positive impact. But manually checking data for quality issues is error-prone and a huge drag on your velocity, slowing you down and making you less productive.

Using data quality testing (e.g. with dbt tests) and data observability helps to ensure you find out about quality issues before your stakeholders do, winning their trust in you (and the data) over time.
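In the same spirit as dbt-style tests, here is a minimal pure-Python sketch of automated quality checks (not-null, uniqueness, accepted values) run on rows before they reach stakeholders. The column names and rules are illustrative assumptions.

```python
# Minimal sketch of automated data quality tests, in the spirit of dbt's
# not_null / unique / accepted_values tests, applied to incoming rows
# before they feed a dashboard. Column names and rules are illustrative.

def check_rows(rows):
    """Return a list of human-readable data quality failures."""
    failures = []
    seen_ids = set()
    for i, row in enumerate(rows):
        if row.get("order_id") is None:
            failures.append(f"row {i}: order_id is null")
        elif row["order_id"] in seen_ids:
            failures.append(f"row {i}: duplicate order_id {row['order_id']}")
        else:
            seen_ids.add(row["order_id"])
        if row.get("status") not in {"open", "shipped", "returned"}:
            failures.append(f"row {i}: unexpected status {row.get('status')!r}")
    return failures

rows = [
    {"order_id": 1, "status": "open"},
    {"order_id": 1, "status": "teleported"},  # duplicate id, bad status
    {"order_id": None, "status": "shipped"},  # null id
]
for failure in check_rows(rows):
    print(failure)
```

Wired into a scheduled job that alerts on a non-empty failure list, even checks this simple catch the silent schema and freshness breaks described above before a stakeholder does.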

Data quality problems can easily lead to an annoying blame game between data science, data engineering, and software engineering. Who broke the data? And who knew? And who is going to fix it?

But when bad data goes out into the world, it's everyone's fault. Your stakeholders want the data to work so that the business can move forward with an accurate picture.

Good data scientists and ML engineers build accountability into all data pipeline steps with Service Level Agreements (SLAs). SLAs define data quality in quantifiable terms, assigning responders who should spring into action to fix problems. SLAs help avoid the blame game entirely.

Trust is so fragile, and it erodes quickly when your stakeholders catch mistakes and start blaming. But what about when they dont catch quality issues? Then the model is poor, or bad decisions are made. In either case, the business suffers.

For example, what if you have a single entity logged as Dallas-Fort Worth and DFW in a database? When you test a new feature, everyone in Dallas-Fort Worth is shown variation A and everyone in DFW is shown variation B. No one catches the discrepancy. You can't draw conclusions about users in the Dallas-Fort Worth area: your test has been thrown off, and the groups haven't been properly randomized.
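The fix for this failure mode is to canonicalize entity aliases before bucketing users into experiment variants. The alias table and hash-based bucketing below are a hypothetical sketch, not any specific platform's API.

```python
# Sketch of the fix for the DFW example: map entity aliases to one canonical
# form before assigning A/B variants, so differently spelled regions land in
# the same bucket. The alias table and bucketing rule are illustrative.

import hashlib

ALIASES = {
    "dfw": "dallas-fort worth",
    "dallas fort-worth": "dallas-fort worth",
    "dallas-fort worth": "dallas-fort worth",
}

def canonical(region):
    key = region.strip().lower()
    return ALIASES.get(key, key)

def variant(user_id, region):
    """Deterministically bucket a user into A/B using the canonical region."""
    digest = hashlib.sha256(f"{user_id}:{canonical(region)}".encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

# Same user, differently spelled region: same canonical form, same variant.
print(canonical("DFW") == canonical("Dallas-Fort Worth"))  # True
print(variant(42, "DFW") == variant(42, "Dallas-Fort Worth"))  # True
```

Deterministic hashing also means a user keeps the same variant across sessions, which is exactly the randomization property the mismatched spellings were silently breaking.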

Clear the path for better experimentation and analysis through a foundation of higher quality data. By using your expertise to boost quality, your data will become more reliable, and your business teams can run meaningful tests. The team can focus on what to test next instead of doubting the results of the tests.

Confidence in the data starts with you; if you don't have a handle on high-quality, reliable data, you'll carry that burden into your interactions with the product and your colleagues.

So stake your claim as the point-person for data quality and data ownership. You can have input into defining quality and delegating responsibility for fixing different issues. Remove friction between data science and engineering.

If you can lead the charge to define and boost data quality, you'll impact almost every other team within your organization. Your teammates will appreciate the work you do to reduce org-wide headaches.

Incomplete or unreliable data can lead to terabytes of waste. That data lives in your warehouse, getting included in queries that incur compute costs. Low-quality data can be a major drag on your infrastructure bill as it gets included in the filtering-out process time and again.

Identifying and cleaning up low-quality data is one way to immediately create value for your organization, especially in pipelines that see heavy traffic for product analytics and machine learning. Recollect, reprocess, or impute and clean existing values to reduce storage and compute costs.

Keep track of the tables and data you clean up, and the number of queries run on those tables. It's essential to notify your team about how many queries no longer run on junk data and how many gigs of storage are freed up for better things.

All data professionals, seasoned veterans and newcomers alike, should be indispensable parts of the organization. You add value by taking ownership of more reliable data. Although tools, algorithms, and analytics techniques are growing more sophisticated, the input data often is not; it's always unique and business-specific. Even the most sophisticated tools and models can't run well on erroneous data. Through the five steps above, the impact of data science can be a boon to your entire organization. Everyone wins when you improve the data your teams depend upon.


Read more:
5 Ways Data Scientists Can Advance Their Careers - Spiceworks News and Insights

Artificial intelligence and machine learning now integral to smart power solutions – Times of India

They help to improve efficiency and profitability for utilities.

The utilities space is rapidly transforming. It is shifting from a conventional, highly regulated environment to a tech-driven market at a fast clip. Collating data and optimizing manpower is a constant struggle. Since the outbreak of the pandemic, the need for smarter optimization of infrastructure has grown monumentally, and so has the dependency on technology. There is an urgent need to balance supply and demand, and this is where Artificial Intelligence (AI) and Machine Learning (ML) come into play. Data science, aided by AI and ML, has been leading to several positive developments in the utilities space. Digitalization can significantly increase the profitability of utilities through smart meters for grids, digital productivity tools, and automated back-office processes. According to one study, firms can increase their profitability by 20 to 30 percent.

Digital measures rewire organizations to do better through a fundamental reboot of how work gets done.

Customer Service and AI

According to a Gartner report, most AI investments by utilities go into customer service solutions. Some 86% of the utilities studied used AI in digital marketing, call center support, and customer applications. This is testimony that investments in AI and ML can deliver a high ROI by improving speed and efficiency, thus enhancing customer experience. Customer-facing AI is a low-risk investment, since customer enquiries are often repetitive: billing enquiries, payments, new connections, and so on. AI can deliver tangible business results on the customer service front.

Automatic Meters for Energy Conservation

Manual entry and billing systems are not only time-consuming but also error-prone and expensive, so the Automatic Meter Reading (AMR) system has been a breakthrough. AMR enables large infrastructure setups to collect data easily and to analyze cost centers and opportunities for improving efficiency in the natural gas, electric, and water sectors, among others. It offers real-time billing information for budgeting and is more precise than manual entry. Additionally, it can store data at distribution points within the utility's networks, which can be accessed easily using mobile and handheld devices. Energy consumption can be tracked to aid conservation and to curb energy theft.

Predictive Analytics Enable Smart Grid Options

By leveraging new-age technologies, utilities can benefit immensely: in the energy sector, these technologies help build smart power grids. The sector relies heavily on a complex infrastructure that can face multiple problems arising from maintenance issues, weather conditions, system or equipment failure, demand surges, and misallocation of resources; overloading and congestion waste a lot of energy. The grids produce a huge volume of data that, when properly utilized, helps with risk mitigation. Because so much data continuously passes over the grid, it can be challenging to collect and aggregate, and operators can miss insights, leading to malfunctions or outages. With the help of ML algorithms, those insights can be extracted to keep the grids running smoothly, and automated data management can help keep the data accurate. Using predictive analytics, operators can predict grid failures before customers are affected, creating greater customer satisfaction and mitigating financial loss.
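The article does not describe a specific algorithm, but a minimal sketch of the idea behind such analytics is flagging readings that deviate sharply from recent history. The rolling z-score below is a simplified illustration with invented load numbers, not a production grid-monitoring method:

```python
import statistics

def flag_anomalies(readings, window=24, threshold=3.0):
    """Flag indices whose reading deviates more than `threshold`
    standard deviations from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        if stdev > 0 and abs(readings[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Steady load around 100 MW with one sudden surge at index 30:
load = [100.0 + 0.5 * (i % 3) for i in range(40)]
load[30] = 160.0
print(flag_anomalies(load))  # → [30]
```

Real grid systems would layer weather data, equipment telemetry, and trained ML models on top of this kind of baseline, but the principle is the same: surface the deviation before it becomes an outage.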

Efficient and Sustainable Energy Consumption

Smart grids also allow for better allocation of energy, since consumption is matched to demand; this saves resources and helps with load management and forecasting. AI can also deal with vegetation issues by analyzing operational data and statistics, which helps utilities proactively manage wildfire risk and makes the system more sustainable and efficient. To overcome weather-related maintenance issues, automation helps receive signals and prioritize the areas that need attention, saving money and cutting downtime. To achieve this, the sector adopts ML capabilities, since it needs automation that is fast and easy to access.

The construction sector is also a major beneficiary of these solutions. Building codes and architecture often pose huge challenges that take a long time to meet, but some solutions help builders and developers test these applications seamlessly, without system interruptions. By integrating AI and ML into data management platforms, developers enable data science teams to spend more time innovating and much less time on maintenance. With the rise in computational power and the accessibility of the cloud, deep learning algorithms can train faster while their cost is optimized. AI and ML can impact many aspects of business: AI can enhance the quality of human jobs by facilitating remote working, help with data collection and analysis, and provide actionable inputs. Data analytics platforms can throw light on areas of inefficiency and help providers keep costs down.

Though digital transformation might appear intimidating, its opportunities far outweigh the associated cost and risk. Gradually, all utilities will undergo digital transformation as it takes root across the industrial sectors. This AI-led transformation will improve productivity and revenue, make networks more reliable and safe, accelerate customer acquisition, and facilitate entry into new areas of business. Globally, the digital utility market is growing at a CAGR of 11.7% over the period 2019 to 2027: according to a study by ResearchAndMarkets.com, the market generated revenue of US$ 141.41 Bn in 2018 and is expected to reach US$ 381.38 Bn by 2027. As the sector evolves, the advantages of AI and ML will come into play, leading to smarter grids, more efficient operations, and higher customer satisfaction. The companies positioned to take advantage of this opportunity will be ready for the challenges that emerge in the market.
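As a quick sanity check on those market figures (the endpoints are the ones cited above), the implied growth rate can be recomputed directly:

```python
# Implied compound annual growth rate (CAGR) from the cited market figures:
# US$ 141.41 Bn in 2018 growing to US$ 381.38 Bn by 2027 (9 years).
start, end, years = 141.41, 381.38, 2027 - 2018
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # → Implied CAGR: 11.7%
```

The endpoints reproduce the cited 11.7% rate almost exactly, so the figures are internally consistent.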

Views expressed above are the author's own.
