Predictive analytics and Machine Learning crucial to energy – Energy Digital

Predictive analytics and Machine Learning (ML) have a critical role to play in energy decarbonisation, according to a new report from data specialists CKDelta.

The report, "Pioneering cross-sector change and collaboration", calls for greater collaboration in the utilities sector to fulfil its climate ambitions.

Managing Director Geoff McGrath said the potential to integrate this data across the value chain means we can re-conceptualise how we think about, and deploy, systems with both embedded and adaptive intelligence to optimise system performance, without compromising the net zero goal.

"The utilities sector is at a watershed moment," he said. "The eyes of consumers and regulators are firmly fixed on electricity, water, and gas providers across the UK. Cost, environmental impacts, and consumer satisfaction are changing the way the sector delivers for customers."

Combining insight from water services provider Northumbrian Water Group, the report examines how the utilities sector can address four key challenges facing the industry today, including leak reduction, shifting patterns of usage, and the emergence of a new energy economy, by deploying open-source, data-driven models.

It comes at a time when electricity, gas, and water companies are coming under increasing regulatory and consumer scrutiny and the sector is driving forwards with ambitious environmental targets. The water sector has committed to delivering net zero emissions by 2030, while the government has committed to decarbonising the electricity grid by 2035.

Highlighting shifting patterns of energy and water usage as a core challenge to achieving these targets, the report states that we need integrated solutions that can accurately accommodate and predict both emerging and static trends.

It identifies predictive data models, developed from Machine Learning with high-frequency data, as one such solution, noting that these models could also play a key role in optimising existing systems and networks.

The report goes on to suggest that companies and their investors should rethink their approach to effectively address the challenges posed by delivering a low carbon future, adopting whole-systems models to gain visibility of competing aims across networks. These models empower organisations to holistically assess alternative energy and investment needs against other commercial targets, such as cost reduction.

CKDelta conclude their report with four recommendations, designed to foster an environment of collaboration and change, transparency and openness, and to deliver on the sector's net zero ambitions.

These recommendations include putting the consumer at the heart of organisational decision making, using integrated data sources at all stages of the value chain, and keeping whole systems models at the forefront when deploying new infrastructure.

Nigel Watson, Chief Innovation Officer at Northumbrian Water Group, said: "As we near the halfway mark on AMP7 (Asset Management Period), we are now starting to shape and share what our plan will be for AMP8."

"We have already set our own ambitious target to reach net zero by 2027," he said. "What is becoming clear is the need to collaborate on how this is achieved and how we understand and utilise the tools that will deliver on our bold environmental ambitions. The insights offered from open data are ultimately what will help us to drive the systemic responses to these challenges and help enable the transition to net zero in our industry."


Machine learning helps to validate the species, provenance and cut of meat samples – Beef Central

THE analytical power of machine learning models is helping to validate the species, provenance and cut of meat.

Despite the inclusion of analytical testing within meat production systems around the world, meat fraud still happens. The 2013 horsemeat infiltration of the European beef supply chain is an example.

Australia's most infamous case of substitution took place in 1981, when kangaroo meat was substituted for beef in an export consignment to the US. The events profoundly damaged the industry's reputation at the time, and Australia narrowly avoided losing its beef export license to the US.

The main culprits were charged with forgery, exporting a prohibited product, false trade description, conspiracy, perjury, theft by deception and selling pet meat for human consumption. Several men received up to four and a half years' jail. The events were captured in a popular satirical song of the era, "Who put the roo in the stew?"

A recent 20-year analysis, which examined more than 400 incidences of beef fraud around the world, found that counterfeiting meat species or substituting one cut for another (within the same species) were the most common types of meat fraud.

Prof Louwrens Hoffman

This kind of counterfeiting, with its food safety and consumer-swindling implications, is being tackled by Queensland Alliance for Agricultural and Food Innovation (QAAFI) Professors Louwrens Hoffman and Daniel Cozzolino.

"With population growth increasing demand for food, there is considerable economic gain in adulterating food; swapping premium products for inferior products or species," Professor Hoffman said.

"And high-value products, such as meat, are especially susceptible to food fraud."

As QAAFI's chair of meat science, Professor Hoffman is concerned by limitations in the testing technology currently used within meat production systems to detect deliberate fraud or accidental substitution.

Consequently, he has been examining newer technology for its potential to overcome current limitations and says a step-change in testing capability is possible.

"The cut, the species and even the provenance of meat, down to the region of origin and feedlot, can now be rapidly determined using imaging technology that is easy and non-destructive to use," he said.

Professor Hoffman said the needs of industry can be best met by using light-based (spectroscopic) technology to provide data about a meat sample. The analysis of this data is done with advanced machine learning algorithms that QAAFI is helping to develop.

He said that light is especially useful for analytical purposes because of a quirk of physics. Atoms (or more specifically, electrons in atoms) can absorb and emit light. As a result, every atom, molecule and compound in the universe produces a unique spectrum of reflected light. This acts as a signature that can be used to forensically identify any compound.

Prof Hoffman is recommending the use of commercially available devices that emit light in the near-infrared (NIR) range.

Handheld NIRs project light onto a meat sample and collect the reflected light, called the "signature".

The caveat with this approach is that the spectral signatures that can identify meat cuts, species and provenance have to be decoded beforehand. This is where additional R&D is needed.

Meat is a biochemically complex material. By necessity, the imaging-based identifiers of meat traits are equally complex and surpass the ability of human senses to detect them.

In the past, the problem would have hit an impasse here. Instead, Prof Hoffman turned to machine learning algorithms to solve what amounts to an enormous statistical jigsaw puzzle.

"To develop the analytical software, we matched the spectral signatures of meat products of known species, cut, provenance or other variable of interest," he said. "That data is used to train machine learning algorithms to detect what distinguishes the different samples from a complex set of spectral clues."
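In outline, that training approach resembles a standard supervised classification pipeline. The sketch below is a toy illustration only: the synthetic "spectra", the species names and the nearest-centroid classifier are assumptions made for the example, not QAAFI's actual software.

```python
import numpy as np

# Toy sketch: train a classifier on labelled spectral signatures, then
# use it to identify an unknown sample. All data here is synthetic.

rng = np.random.default_rng(1)
wavelengths = 100                       # number of NIR bands measured (assumed)
species = ["beef", "kangaroo"]

# Each species gets a characteristic (made-up) reference spectrum.
reference = {s: rng.uniform(0, 1, wavelengths) for s in species}

def sample(s):
    # a measured spectrum = the species' signature plus instrument noise
    return reference[s] + rng.normal(0, 0.05, wavelengths)

# "Training": compute a mean spectrum (centroid) per labelled class.
train = {s: np.mean([sample(s) for _ in range(50)], axis=0) for s in species}

def classify(spectrum):
    # nearest-centroid decision in spectral space
    return min(species, key=lambda s: np.linalg.norm(spectrum - train[s]))

unknown = sample("kangaroo")
print(classify(unknown))                # expected to recover the true species
```

A real system would use far more labelled samples, spectral preprocessing and a more capable model, but the workflow, matching labelled signatures and then training a discriminator on them, is the same.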

This training process for machine learning could be expanded in the future as industry needs evolve. This could come to include, for example, insects as they start entering the food and feed supply chains, something that Professors Hoffman and Cozzolino also work on from a food-safety perspective.

Prof Hoffman has also been involved in field testing the handheld NIR technology, including in South Africa where it proved highly effective.

"We could rapidly differentiate between South African game species, the muscle type and whether the meat was fresh or frozen," he said.

Accuracies for African species differentiation ranged from 89.8 to 93.2 percent and included ostrich, zebra and springbok game meat. Given that South Africa currently has no game meat quality standards or standardised meat cuts, this kind of technological advance opens up new opportunities to cheaply and effectively provide consumer protection.

Prof Hoffman pointed out that the technology is only as good as the back-end analytical software. "That is where industry should collaboratively focus its attention in terms of R&D investment to effectively stamp out meat fraud," he said.

"Once the machine learning models are operational, the system is fast, cheap, reliable and accurate," Prof Hoffman said.


Source: QAAFI


How Huupe’s Innovative Smart Basketball Hoop uses Machine Learning and Advanced Analytics to Revolutionize How Basketball Enthusiasts Play the Game -…

LONDON, UK / ACCESSWIRE / March 31, 2022 / As the National Basketball Association (NBA) increases its domestic and international reach, the league and the game of basketball, in general, are maturing. Specifically, there is a growing analytics revolution happening across the NBA, college basketball, & international basketball. This revolution is not just contained to the top professional leagues as semi-pro, amateur, & youth basketball coaches are utilizing analytics as well. Over the last decade, general managers and coaches have increasingly relied on new hyper-specific statistics and advanced analytics to make smarter decisions on and off the court.

As basketball professionals become more intelligent and new technology enables coaches to capture hyper-specific statistics that were previously impossible to log, players can improve their game by training smarter, not harder. Player Efficiency Rating (PER), True Shooting Percentage (TS%), Usage Percentage (USG%), Offensive & Defensive Win Shares (OWS & DWS), and Offensive & Defensive Box Score Plus/Minus (OBPM & DBPM) are just a few of the new statistical categories changing how we understand the sport. By making use of these new statistics and inventive analytics programs, coaches can help players maximize the outcomes of their training hours and reach their true potential.

Huupe is the newest piece of basketball technology driving the sport's analytics revolution forward. With a smart screen replacing the traditional backboard, advanced (and fun) training mechanisms, and a sleek yet weatherproof design, Huupe is the world's first smart basketball hoop. Offering training videos, contests, and video highlights directly from the smart hoop's backboard, Huupe presents basketball enthusiasts with a powerful new basketball experience. Further, by utilizing machine learning and computer vision, Huupe's smart basketball hoop can track players' statistics during play and analyze the captured data.

Co-founders and lifelong friends Paul Anton and Lyth Saeed spent one year prototyping and three years perfecting Huupe's hardware and software with their CTO Dan Hayes, in order to make sure that the smart hoop is a truly revolutionary invention in IoT and smart consumer product technology.


After building and breaking more basketball hoops and smart screens than one can count, the Huupe team has created a game-changing smart basketball hoop that utilizes computer vision and machine learning to capture and analyze important statistics needed to help a player's performance. While the Huupe team is extremely proud of the powerful technology behind the smart hoop, they are also proud of their product's friendly & exciting gamified UX, the contests & leaderboards, as well as other various internet-enabled features. Of course, as many people install basketball hoops outside, the Huupe is extremely durable and weatherproof without compromising the aesthetic appeal.

Whether Huupe owners are shooting around, playing games with friends, competing in challenges, or practicing with one of the hundreds of NBA-level training videos, Huupe's smart basketball hoop stores all of the performance statistics. This allows players to track performance data with ease and intelligently analyze this data with the touch of a finger. Huupe's innovative computer vision captures traditional statistics as well as advanced statistics such as swishes, makes, misses, trajectory, shot position, vertical jump, wingspan, and much more.

With their innovative smart hoop, Anton and Saeed are the perfect individuals to help push the analytics revolution within the NBA and for the entire sport of basketball forward. Anton and Saeed are legitimate lifelong fans of the sport with basketball in their blood. Having bonded throughout their childhood over basketball, Anton and Saeed are actually passionate about the game; they are not simply looking to attach themselves to an innovative piece of technology nor are they simply looking for their next entrepreneurial endeavor. The two co-founders are on a mission to help players improve their skills, increase opportunities for people to access world-class training, and help basketball enthusiasts connect with like-minded individuals.

Further, Anton and Saeed are uniquely equipped to achieve these goals. Anton's previous venture, Real Shot, used AR/VR technology, machine learning, and computer vision to create an innovative basketball experience, earning a spot in Deutsche Telekom's hub:raum accelerator; Saeed has an impressive track record helping marketplace and AI technology startups operate and grow their business.

We are excited to see how these passionate and talented co-founders continue to capture the hearts and minds of basketball players and fans around the world with their game-changing smart hoop.

Media Contact:

Name: Saqib Malik
Company: Prestige Perfections
Location: London, UK
Number: +447935552527

SOURCE: Prestige Perfections

View source version on accesswire.com: https://www.accesswire.com/695440/How-Huupes-Innovative-Smart-Basketball-Hoop-uses-Machine-Learning-and-Advanced-Analytics-to-Revolutionize-How-Basketball-Enthusiasts-Play-the-Game


How can reinforcement learning be applied to transportation? – Analytics India Magazine

Reinforcement Learning (RL), a field of machine learning, is based on the principle of trial and error: the agent learns from its own mistakes and corrects them. The aim is simply to build a strategy that guides an intelligent agent to take a sequence of actions leading to some ultimate goal. Deep Reinforcement Learning (DRL) is used to make real-time decisions and strategies not only in Autonomous Driving (AD) but also in fields such as sales and management. In this article, we will mainly discuss how RL can be used in transportation to build better intelligent solutions.

Let's understand how reinforcement learning works first.

Reinforcement Learning (RL) is a decision-making and strategy-building technique that uses trial and error to perform these operations in real time. It differs from the other two machine learning paradigms, supervised and unsupervised learning, in that it learns by interacting with an environment rather than from a fixed dataset.

The basic architecture of Reinforcement Learning consists of five key components: the agent, the environment, states, actions, and rewards.
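These five components can be illustrated with a minimal agent-environment loop. The one-dimensional "corridor" environment and its reward scheme below are invented for the example:

```python
import random

# Minimal illustration of the five RL components: agent, environment,
# state, action, reward. The 1-D corridor environment is an invented example.

class Corridor:
    """Environment: the agent starts at position 0 and must reach position 4."""
    def __init__(self):
        self.state = 0
    def step(self, action):                # action: -1 (left) or +1 (right)
        self.state = max(0, min(4, self.state + action))
        reward = 1.0 if self.state == 4 else -0.1   # small step penalty
        done = self.state == 4
        return self.state, reward, done

random.seed(0)
env = Corridor()
done, total_reward = False, 0.0
while not done:
    action = random.choice([-1, 1])        # an untrained agent's random policy
    state, reward, done = env.step(action)
    total_reward += reward
print(total_reward)
```

Reinforcement learning proper replaces the random policy with one that is updated from the observed rewards, so that the agent reaches the goal with fewer wasted steps.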


Some decision-making problems are too complex for plain RL. To handle them, a new technique was developed that combines neural networks with RL; it can manage complex decision making and strategy building and is known as DRL.

Deep Reinforcement Learning (DRL) is a machine learning technique that combines reinforcement learning with deep neural networks. The basic principles are the same as in Reinforcement Learning, but neural networks supply the computational power needed to solve complex problems.

Combining the ability to tackle large, complex problems with a generic, flexible framework for sequential decision-making, deep reinforcement learning has become increasingly popular in autonomous decision-making and operational control. Let's see how DRL can be implemented to control a taxi fleet.

In the coming decades, ride-sharing companies such as Uber and Ola may aggressively begin to use shared fleets of electric, self-driving cars that could be dispatched to pick up passengers and drop them off at their destinations. As appealing as that sounds, it will be complex to implement. One major operational challenge such systems might encounter is the imbalance of supply and demand: users' travel patterns are asymmetric both spatially and temporally, so vehicles cluster in certain regions at certain times of day, and customer demand may not be satisfied in time.

So, to be optimal, the model has to account for parameters such as customer demand and travel times. The objective of the dispatching system is to provide the optimal vehicle dispatch strategy at the lowest possible operational cost; on the passenger side, there are costs associated with the waiting time experienced by all passengers. To solve this problem, the actor-critic algorithm is implemented.

Actor-critic methods combine the advantages of actor-only (policy function only) and critic-only (value function only) methods. Policy gradient methods are reinforcement learning techniques that optimize parametrized policies with respect to the expected return, i.e. the long-term cumulative reward, by gradient descent. They do not suffer from many of the problems of value-based methods, such as the complexity arising from continuous state and action spaces.

The general idea of policy gradient is that, by sampling trajectories (sequences of state, action and reward tuples) from the environment under the current policy, the agent can collect the rewards associated with different trajectories and update the parametrized policy function so that high-reward trajectories become more likely than low-reward ones.
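That update can be sketched on a toy problem. Below, a softmax policy over two actions is trained with a vanilla (REINFORCE-style) policy-gradient update; the two-action reward setup is an invented illustration, not the paper's taxi environment.

```python
import numpy as np

# Vanilla policy-gradient sketch on a toy 2-action problem.
# The reward setup, learning rate and step count are illustrative assumptions.

rng = np.random.default_rng(0)
theta = np.zeros(2)              # policy parameters: one logit per action

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reward(action):
    # action 1 pays more on average, so the policy should learn to prefer it
    return rng.normal(1.0 if action == 1 else 0.2, 0.1)

lr = 0.1
for _ in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)           # sample an action from the policy
    r = reward(a)
    # grad of log pi(a | theta) for a softmax policy: one_hot(a) - probs
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta += lr * r * grad_log_pi        # ascend the expected reward
print(softmax(theta))                    # mass should concentrate on action 1
```

The same reward-weighted update, applied to whole trajectories instead of single actions, is the vanilla policy gradient the article describes; the critic's role is to reduce the variance of the reward signal used in the update.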

Policy gradient methods inherit strong convergence properties from gradient descent. However, the sampled rewards usually have very large variances, which makes the vanilla policy gradient method inefficient to learn; the critic's value function helps to reduce this variance. The schematic flow of this algorithm, shown below, explains its plan of action in the model.

Let's see the background process of DRL used for dispatching taxis with the help of a case study.

The objective of this case study is to learn the process by which DRL dispatches taxis for a particular region. A fully-connected neural network with a total of 8 hidden layers is used, 4 each for the actor function and the critic function. There are 128 units in each hidden layer, with a learning rate of 5×10⁻⁵ and a trajectory (samples of sequences of state, action and reward tuples) batch size of 1024 for each iteration.
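Those network shapes can be sketched as follows. The hidden-layer count, 128 units per layer, learning rate and batch size follow the case study; the state dimension and number of dispatch zones are invented assumptions, and no training is performed here.

```python
import numpy as np

# Shape sketch of the two fully-connected heads described in the case study:
# an actor (policy) and a critic (value), each with 4 hidden layers of 128 units.

rng = np.random.default_rng(42)
state_dim, n_zones, hidden = 10, 5, 128   # state_dim and n_zones are assumed
batch = 1024                              # trajectory batch size per iteration
lr = 5e-5                                 # learning rate (unused in this forward-pass sketch)

def mlp_params(sizes):
    # one (weights, bias) pair per layer
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

def forward(params, x):
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)    # ReLU hidden layers
    W, b = params[-1]
    return x @ W + b                      # linear output layer

actor = mlp_params([state_dim, hidden, hidden, hidden, hidden, n_zones])  # dispatch logits
critic = mlp_params([state_dim, hidden, hidden, hidden, hidden, 1])       # state value

states = rng.normal(size=(batch, state_dim))
logits = forward(actor, states)           # one score per candidate dispatch zone
values = forward(critic, states)          # one value estimate per state
print(logits.shape, values.shape)
```

In training, the actor's logits would define the dispatch policy and the critic's values would serve as the variance-reducing baseline for the policy-gradient update.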

Assume that the travel demand is deterministic for this study, i.e. from day to day there is a fixed number of passengers who need to travel between each pair of zones at a certain time of day. The optimal dispatching strategy is solved from a formulation that combines the waiting-time costs for the passengers and the costs of repositioning empty vehicles; this formulation is known as the integer programming (IP) model.

Tracking the convergence of the RL model shows that the converged value was very close to the optimal value calculated by the theoretical method. Now let's allow some stochasticity in the travel demand realization to check the robustness of the model. The travel demand distribution was divided into two parts, weekdays and weekends, and on each day one travel demand profile was picked randomly for the network.

The DRL learner has no knowledge of this setup and starts learning without any prior information about the network. The same process was implemented as in the deterministic scenario. Even though the travel demand is now stochastic and unknown, the actor-critic method, which may not reach the theoretical optimum, can still provide satisfying results.

In this case, the proposed model-free reinforcement learning method (i.e., actor-critic) is an efficient alternative way to solve for reliable and close-to-optimal solutions.

This article has explained a deep reinforcement learning approach to the problem of dispatching autonomous vehicles for taxi services: in particular, a policy-value framework with neural networks as approximators for both the policy and value functions.


Link Machine Learning (LML), High Volatility and Rising Thursday: Is it Time to Cash Out? – InvestorsObserver

Link Machine Learning (LML) has been relatively more volatile than the crypto market according to a recent analysis of the market. So far Thursday, the crypto has advanced 42.02% to $0.006831457729.

The Volatility Gauge tracks volatility over time, which means that one day won't define the token's volatility rank; a trend will. LML's high volatility reading is coupled with a low reading on the Risk/Reward Gauge, meaning that the token has relatively wide price swings and is well protected from price manipulation.

Link Machine Learning's price is trading above resistance, with support around $0.00400022790391529 and resistance around $0.0064162672552892. This leaves Link Machine Learning out of range and potentially in a volatile position if the rally burns out.



A Qualitative Thematic Analysis of Addressing the Why: An Artificial Intelligence (AI) in Healthcare Symposium – Cureus

According to a report by Johns Hopkins, medical errors are now the third leading cause of death, behind cardiovascular disease and cancer [1]. The study details inefficient processes and distracted and inconsistent care as causative factors, not bad doctors. Medicine is a profoundly personal profession, especially in primary care. Providers take care of patients from the womb to the tomb and everything in between. A patient expects their primary care provider to be empathetic and knowledgeable in their craft. Instead, individuals often encounter burned-out providers, overburdened by inefficient documentation within electronic medical records, inefficient processes, and inadequate clinic staffing [2]. The pandemic has highlighted the importance of adequate staffing, the mental and physical health of staff, and efficient processes if a health system is to meet the growing demands of the public. Too often, the realities of a complex system that cannot function at the highest level loom large over a public in desperate need of the proper care at the right time.

The 2021 update of the Commonwealth Fund report, which examines health outcomes among high-income countries, does not cast a favorable view of the United States' healthcare system [3]. The report looked at 71 measures across five areas: access to care, care process, administrative efficiency, equity, and health outcomes [3]. The United States ranked last overall. It came in second on care process; however, it ranked last on the remaining four measures [3]. This rank stands in stark contrast to the amount spent on healthcare in the United States, which far outspends the other countries as a percentage of gross domestic product devoted to healthcare [3]. Artificial intelligence (AI) is gaining attention as a disruptor of the status quo in medicine. Great promise and potential lie within AI as a growth agent to improve process efficiency and care within medicine.

AI is considered by many to be the driver of the most recent industrial revolution, as detailed in a Forbes article entitled "The 4th Industrial Revolution Is Here, Are You Ready?" [4]. AI has revolutionized the way we communicate and interact with the supply chain and has increased efficiency in multiple industries, ultimately increasing profit margins. According to a white paper from Accenture, AI can increase healthcare profits by 55% by the year 2035 [5]. Integration of AI into primary care is part of this growth. Currently, AI is being used and tested in specialties such as radiology, cardiology, and oncology [6,7]. Specialties dependent on imaging have seen rapid acceptance of AI pilot programs due to AI's ability to synthesize large data sets, evaluate them, and diagnose accurately. Some of these AI programs are showing diagnostic accuracy equal to or better than that of human physicians. The appropriate application of this technology continues to be researched [6,7].

Healthcare organizations have begun to adopt AI systems and have successfully implemented aspects of this technology into their daily processes. However, AI has yet to gain full acceptance throughout healthcare. AI has the potential to garner mainstream attention; however, it must first gain the trust of patients, providers, and staff while showing viability as a business model within clinics and health systems.

This project examines the themes garnered from a thematic analysis of an online symposium on AI in medicine. The objectives of the symposium were: 1) to review current trends in AI in medicine; 2) to assess the short-term and long-term potential of AI in medicine to address issues such as patient access, patient engagement, and patient safety; and 3) to understand the current barriers to the implementation and utilization of AI in medicine.

In June 2021, five expert speakers convened a web-based symposium to discuss some of the more controversial topics around AI. The industry experts included a data scientist from a university with research around data mining, a senior program engineer from a large electronic medical record company, an executive from a prominent AI healthcare platform, a chief medical information officer with a large local health system, and a fellow from a medical informatics program. The five 20-minute modules were uploaded to a web-based platform for viewing in advance of the 60-minute moderated roundtable (Zoom, Zoom Video Communications, San Jose, CA, USA), modeling a "flipped classroom" curricular design. The interactive 60-minute moderated roundtable provided an opportunity for participants to engage directly with the presenters, ask questions, and critically analyze the topics in a meaningful way. The panel discussion was transcribed, with three authors (identified here as EK, HP, and JB) reviewing the themes. An inductive thematic analysis of the semi-structured moderated panel was performed utilizing an iterative process: the transcription was reviewed multiple times, with each reviewer identifying codes. Common themes from these codes were analyzed and condensed for dissemination, and included data privacy and access, process improvement, physician experience, value in data, and bias in healthcare and AI.

For the evaluation of themes, a topical literature search was conducted utilizing Google Scholar (Google, Mountain View, CA, USA) with the following queries: AI and data privacy and data access, AI and process improvement, AI and physician's experience, AI and bias in healthcare, and AI and value in data. Articles with a published date of January 2020 to the present were considered (Table 1).

The following themes emerged after reviewing the transcribed data: data privacy and access (N=3, number of times identified); process improvement (N=2); physician experience (N=1); value in data (N=2); and bias in healthcare and AI (N=3) (Table 2).

Data from the symposium were synthesized utilizing an iterative process. The transcription was analyzed, and the section below reflects the synthesis of themes followed by quotes from presenters supporting the themes. The discussion section applies medical literature to each theme for further evaluation.

Large amounts of data exist within electronic medical records (EMRs), smartphones, and mobile devices; how do we utilize technology to synthesize these data for process improvement and quality measures while maintaining patient privacy? In the United States, organizations own their data. How do we share data among health systems in a meaningful way while maintaining privacy? Is there a way to compensate patients for the use of their data? Would this incentivize patients to engage in programs that seek data for research purposes? The appropriate analysis and dissemination of data between healthcare organizations provide an opportunity for insights that improve care delivery. Nations with a centralized healthcare system can draw on large amounts of data without the privacy concerns of sharing between organizations. The United States has a siloed system in which each organization owns the data of its patient population. Interoperability is an essential piece in improving data sharing in the US healthcare system, and steps must be taken to ensure data collection and sharing are done ethically.

One solution discussed by presenters is training in basic algorithms, data governance, and interoperability. Patients and healthcare professionals generally lack an understanding of AI and data management. Tech companies like Google and Amazon have a competitive advantage over health systems concerning data governance and algorithm management. Understanding how data scientists and engineers create and evaluate algorithms is essential for healthcare professionals to engage in data management. Healthcare professionals must engage in conversations around data management if health systems want to be competitive in the health tech market. Training in data management for healthcare professionals, administrators, and patients is an important step to help create and maintain privacy standards and improve data sharing between organizations.

"How are we going to survive and make this transition into sort of a data-centric model, versus having all these silos where, you know, we're very protective of our data, but how do we engage with other organizations, how do we leverage the power of, um, data sharing in a way that maintains privacy?" EK

"AI now relies a lot on EHR data, but we are thinking about smartphones, wearable devices, huge amounts of data that patients are collecting, and patients are not sure they want to share that data. Do they want more transparency on how the data is being used?" AG

"I mean, when we go to a clinician and tell them, hey, here's a bunch of data, I mean, they're going to be interested for about six seconds, because they know that there's power in that, but it's sort of like taking a drowning man in the middle of the Atlantic and handing him a glass of water and saying, "here, this is going to be really good for you." RC

"If we do the work to be able to get information aggregated and accessible, will it actually be useful in a clinical setting? Will it actually improve outcomes?" RC

Clinicians are drowning in data. How does an extensive data set show value in healthcare if a provider does not have the time or ability to analyze the data before them? Auto-summarization can pick out important pieces of a patient's entire chart, including structured and unstructured data, and synthesize them into an easily reviewable document for the provider. AI and machine learning (ML) can improve efficiency in back-office processes in billing, scheduling, and provider documentation. The improved efficiency of these processes can decrease scheduling and billing errors, leading to improved profit and patient experience. In addition, the improved efficiency in provider workflow can decrease documentation time, allowing the clinician more time with a patient.
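Auto-summarization of the kind described above can be approximated with even very simple techniques. The sketch below is a minimal, hypothetical extractive summarizer: it scores each sentence of a chart note by the frequency of its content words and keeps the top-scoring sentences in their original order. The stopword list, the sample note, and the scoring rule are all illustrative assumptions, not a clinical tool.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real systems use curated clinical vocabularies.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "was", "on", "in", "with", "for"}

def summarize(note: str, max_sentences: int = 2) -> str:
    """Naive extractive summary: keep the sentences whose content words
    occur most often across the whole note, preserving chart order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", note.strip()) if s.strip()]
    words = [w for w in re.findall(r"[a-z]+", note.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence: str) -> float:
        # Average frequency of the sentence's content words.
        tokens = [w for w in re.findall(r"[a-z]+", sentence.lower()) if w not in STOPWORDS]
        return sum(freq[t] for t in tokens) / len(tokens) if tokens else 0.0

    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)
    keep = sorted(ranked[:max_sentences])  # restore original chart order
    return " ".join(sentences[i] for i in keep)

# Hypothetical chart note for demonstration.
note = (
    "Patient reports chest pain on exertion. "
    "Chest pain began two days ago. "
    "Family history includes early cardiac disease. "
    "Patient parked in lot B."
)
print(summarize(note))
```

A production summarizer would also handle structured fields (labs, medication lists) and use trained language models rather than raw word counts, but the basic shape, score then select, is the same.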

"I think the low-hanging fruit, the problems that are solvable, the ones that are easy to measure, you know, are often financial, did I collect more revenue, can I get more patients seen in a day, can I get better utilization. Those are pretty discrete, right, and we can much more quickly and easily measure that." CF

"I think we have a lot of people in healthcare who just don't want to deal with it and they put their head in the sand and say, 'no, I'm not going to do A.I. because I, no one can explain it,' and I think that that's a mistake, because we're missing an incredible opportunity. That would be like, you know, a hundred plus years ago and somebody says, 'yeah, you know what? I understand what you're talking about with germ theory of disease, but I'm not going to participate in that because I don't believe it. You can't show me a germ, I'm not going to believe it until you can.'" CN

Workflow-augmentation algorithms are being developed that utilize natural language processing with ambient voice technology to decrease provider documentation time, allowing for more time in the room with patients. ML and AI are being utilized to improve a provider's experience with chart documentation and data entry, allowing a provider to spend more time on critical thinking and providing care to patients during office visits and at the hospital bedside. Increased provider documentation and data entry requirements are linked to an increased percentage of medical error and burnout. With the implementation of algorithms supporting workflow, a health system can improve quality measures of care and physician experience, leading to improved overall patient care.
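Ambient documentation systems pair speech recognition with NLP models trained on clinical language. As a toy illustration only, the sketch below stands in regular expressions for those models, pulling a few structured fields out of a hypothetical visit transcript; the field names and patterns are assumptions for demonstration, not any vendor's API.

```python
import re

def draft_note(transcript: str) -> dict:
    """Sketch of turning an ambient visit transcript into structured
    chart fields. Real systems use trained NLP models; plain regular
    expressions stand in for them here."""
    note = {}
    # Blood pressure spoken as "<systolic> over <diastolic>".
    bp = re.search(r"\b(\d{2,3})\s*over\s*(\d{2,3})\b", transcript, re.I)
    if bp:
        note["blood_pressure"] = f"{bp.group(1)}/{bp.group(2)}"
    # Medications mentioned after "taking" or "prescribed".
    meds = re.findall(r"\b(?:taking|prescribed)\s+([a-z]+)", transcript, re.I)
    if meds:
        note["medications"] = [m.lower() for m in meds]
    # Symptom duration phrased as "for <n> days/weeks/months".
    duration = re.search(r"for\s+(\w+\s+(?:days?|weeks?|months?))", transcript, re.I)
    if duration:
        note["symptom_duration"] = duration.group(1).lower()
    return note

transcript = ("Your pressure today is 142 over 90. You mentioned the cough "
              "has been going on for three weeks, and you're still taking lisinopril.")
print(draft_note(transcript))
```

The appeal for the clinician is that the structured note is drafted while the conversation happens, rather than reconstructed from memory after the visit.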

Knowledge augmentation is another area being explored through AI and ML methods. Companies are developing algorithms to improve quality and decrease error with the ultimate goal to improve outcomes. Chatbots have been developed and are being utilized as patient triage. Algorithms utilize large data sets and deep AI to learn how to read radiographs and allow providers to use diagnosis assist in EMR systems. AI is a tool, and knowledge augmentation is an area with great promise for deep AI algorithms to decrease variability in care with the ultimate goal of improving outcomes. The lack of explanation in how a deep AI algorithm produces a result is an ongoing concern for its use in medicine. Knowledge augmentation continues to be an avenue of research.
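A triage chatbot can be caricatured as a set of escalation rules. The sketch below is purely illustrative; production systems of the kind described above rely on trained models and far richer symptom ontologies than these hard-coded keyword sets.

```python
def triage(symptoms: set) -> str:
    """Minimal rule-based triage sketch: route a set of reported
    symptoms to an escalation tier. The keyword sets are illustrative
    assumptions, not clinical guidance."""
    EMERGENT = {"chest pain", "shortness of breath", "slurred speech"}
    URGENT = {"high fever", "persistent vomiting"}
    if symptoms & EMERGENT:
        return "call emergency services"
    if symptoms & URGENT:
        return "same-day urgent care"
    return "self-care advice or routine appointment"

print(triage({"runny nose", "cough"}))     # low acuity
print(triage({"chest pain", "sweating"}))  # emergent
```

Even at this toy scale, the design tension the panel raised is visible: the rules are fully explainable, but a deep-learning triage model that performed better would not be.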

"There's this other application of A.I., around workflow augmentation. Taking things that are really burdensome, but relatively easy, and taking those off of people's plates, making that process a lot easier. And so we've seen that be very successful in other industries. I think it's been arguably even a more successful approach to the application of A.I. in non-healthcare, but healthcare continues to focus on the knowledge augmentation rather than workflow augmentation approach, relatively.

"So, if you can walk out of the room with your chart note already written, and ninety percent of your interview complete, and ninety-plus percent of your documentation complete, you're handing the clinician all of the information that they need in the most actionable, usable format possible. That is certainly an application of AI, to know what questions to ask, to know how to translate that information from the patient-friendly interview, into a chart-ready [provider] note." RC

The panelists discussed the importance of finding the right tool for a particular problem. In process improvement, finding the right tool involves understanding the problem compared to the end goal of success: quality improvement, physician experience, and patient outcome. Variability in healthcare leads to inconsistent care. Care should adhere to evidence-based guidelines and be consistent in quality and delivery.

"I think a lot of us in healthcare are in it, yes to care for patients, but also to do it better, right, and we recognize in process improvement, we need to decrease the variability, right. The variation in care needs to get narrowed so that we can recognize if we are doing something right or wrong first, then we can correct it." CN

According to the 2021 Commonwealth Fund report, the United States healthcare system ranked last across 71 measures when rated against other industrialized nations globally [3]. Technology can add value to healthcare, but each program and algorithm must be strategically applied to the appropriate problem. Each organization must consider the ability to implement, maintain, and monitor AI and ML algorithms. Training must occur for clinical and administrative staff. The application and analysis of data sets are as important as, or more important than, the amount of data used. The ability of healthcare staff to understand results from algorithms and apply those results will determine the value of AI in mainstream medicine. RC compared the value perceived by a consumer of social media and of healthcare, saying, "the interesting thing was it was not about building trust, it was about delivering value. And the truth is, Facebook was exciting and fun, and it let you connect with your friends and gave you a whole different way to be able to experience the Internet and interact, and it delivered on the value that people expected the same way as Google. I mean, I think that that's one of those places where healthcare has arguably failed in the past, and I think that that is arguably one of the roots of why we haven't seen more data sharing on the consumer side of things is because they don't see the value of making that data accessible."

"Look at 23andMe. I mean, people pay a hundred plus dollars for the right to give up their genomic information into an aggregate pool. Why'd they do it? Because they made it entertaining. It wasn't even a direct monetary value, in fact it was an inverse monetary value, you had to pay money for the right to get your information into that pool, and people just flowed in there because they made it engaging and entertaining." RC

"How do you see balancing the massive amounts of data that are out there, that need to be able to use it across some of those silos that have come up? But then, perhaps even more functionally, more importantly, is taking that data and turning it into information. Making it actionable, making it valuable." RC

Bias in healthcare affects care pathways and processes and has been an ongoing issue in medicine. AI algorithms highlight the bias already present in the system. An article by Igoe, titled Algorithmic Bias in Healthcare Exacerbates Social Inequities: How to Prevent It, highlights algorithmic bias in healthcare [8]. The author discusses an example of racial bias in the Framingham risk study, in which nearly 80% of respondents were Caucasian. The bias of this study has the potential to affect outcomes when treating a diverse population [8]. In an article by Panch et al. (2019), titled Artificial Intelligence and Algorithmic Bias: Implications for Health Systems, the idea of inherent bias in algorithmic processes is discussed further [9]. The authors define algorithmic bias as the implementation of an algorithm that perpetuates the existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation [9]. Panch et al. go on to describe the reality of this bias: "If the world looks a certain way, that will be reflected in the data, either directly or through proxies, and thus in the decisions" [9].

Data in healthcare have been biased due to the siloed nature of care the US healthcare system has created. Each organization owns its data, and the sharing of data between various groups is complicated. Populations served by various healthcare organizations may be homogenous, thus potentially creating a bias toward particular groups of people. The presenters considered evaluating a group of people to note diagnoses and attempting to risk-stratify the population as a way to help keep the focus of data on patient outcomes. Bias in data can lead to inconsistent care among different populations, and variability can lead to worse outcomes from a population standpoint.
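One concrete way to surface the homogeneity problem described above is to audit how each subgroup is represented in a training cohort before any model is built, echoing the roughly 80% Caucasian Framingham sample cited earlier. The sketch below is a minimal illustration with made-up rows and an arbitrary 10% threshold; a real audit would cover many more attributes and their intersections.

```python
from collections import Counter

def representation_report(records, field="race", threshold=0.10):
    """Count each subgroup's share of a training cohort and flag groups
    whose share falls below `threshold`. The field name, threshold, and
    sample rows are illustrative assumptions."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {"n": n, "share": round(share, 2),
                         "underrepresented": share < threshold}
    return report

# Hypothetical cohort loosely mirroring a skewed study sample.
cohort = ([{"race": "white"}] * 80 + [{"race": "black"}] * 12 + [{"race": "asian"}] * 8)
for group, stats in representation_report(cohort).items():
    print(group, stats)
```

A report like this does not remove bias, but it makes the skew visible before the algorithm inherits it.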

"I want to maybe challenge everybody to think about it differently. It's actually exposing bias that's already there, right. We are already treating people differently, and we're getting away with it because, once again, they're in small scales that nobody has voiced it or it just hasn't been significant enough to be brought to their attention. But, when an algorithm does it, oh my god, everybody has to stop and turn off the algorithm and never use A.I. again." CN

"I'm excited that I think we're going to find all kinds of different possibilities to take better care of our patients, but I think, maybe, to go even back to some of the earlier conversation, we need to gain not just the trust of the doctors on some of the algorithms and this, deep learning black box, you know, unexplainable A.I. We need to gain the trust of our patients too, to let us use the data, because insofar as we don't have a kind of, heterogeneous population in our data, we're going to have nothing but bias, and it's going to exaggerate some of the inconsistent delivery of medicine that we have now." CN

"I think what we need to do instead, if we can't explain all of the nodes through deep learning and the machine learning component of it, we need to understand, what does that patient population look like, right? It's more like a nutrition label, it contains this many of this kind of person, this many diabetics, this many hypertensives, this is their age, and apply it to a similar population to get the best results. And then what you have to do, as somebody using that algorithm, is you need to actually apply it to your population and let it run and make sure it's giving you similar results before you turn it on and you have it start making recommendations in changing care whatsoever." CN
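The local validation step described in the quote above, run the algorithm in shadow mode on your own population and compare its results with the vendor's before enabling it, can be sketched as follows. The toy model, cohort, labels, and acceptance tolerance are all hypothetical.

```python
def local_validation(model, cohort, labels, reported_accuracy, tolerance=0.05):
    """Run a vendor algorithm in shadow mode on the local population and
    compare its accuracy with the vendor's reported figure before
    approving it for live use. `model` is any callable; the tolerance
    is an illustrative governance choice."""
    predictions = [model(x) for x in cohort]
    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    return {"local_accuracy": accuracy,
            "approved": abs(accuracy - reported_accuracy) <= tolerance}

# Toy "model": flag patients with systolic BP >= 140 as hypertensive.
model = lambda patient: patient["systolic"] >= 140
cohort = [{"systolic": s} for s in (120, 150, 135, 160, 145, 118)]
labels = [False, True, False, True, False, False]  # chart-reviewed local truth
print(local_validation(model, cohort, labels, reported_accuracy=0.90))
```

Here the model misclassifies one borderline patient, so local accuracy falls outside the tolerance band and the algorithm would stay in shadow mode, exactly the gate the quote argues for.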

The term artificial intelligence was first coined in 1956 by John McCarthy. He defined AI as "the science and engineering of making intelligent machines" [10]. The field of AI is large and separated into different subfields including machine learning (ML), deep AI, and natural language processing (NLP). Machine learning works with established data sets and excels at pattern identification and analysis [11]. Deep AI, or deep learning, is composed of neural networks and allows the program or machine to learn and make autonomous decisions [11]. Natural language processing allows a machine to listen to a human voice and synthesize and analyze information based on conversations [11]. A detailed timeline of some aspects of the history of AI in medicine is shown in Figure 1, taken from an article in Gastrointestinal Endoscopy, Vol. 92, Issue 4, titled "History of Artificial Intelligence in Medicine" [11].

The aforementioned panel consisted of five experts in their fields, all related to AI and data dissemination to improve processes and outcomes in healthcare. The symposium was designed as an accredited continuing medical education event. The panelists were asked to provide a 30-minute presentation and participate in an online moderated discussion. The moderated discussion centered around the various problems each participant felt AI is equipped to solve. AI is a tool, and one must figure out the problem before the implementation of the tool. Problems exist in medicine that are both process driven and provider/patient experience driven. AI, as a tool, has great promise to create improved pathways for scheduling, billing, and documentation within healthcare. AI can free up time for a provider to have improved conversations with patients, decrease errors in documentation, and potentially decrease burnout [12]. Currently, companies are using chatbots to assist in triage and funnel patients to the right provider. Some chatbots, such as one Babylon Health developed, even give treatment recommendations for low-acuity issues [12]. In medical imaging, AI has been studied and shown to be as effective as specialists and radiologists in reading various image modalities across multiple specialties [6,7].

As highlighted by the panelists, knowledge augmentation is challenging to implement on a large scale in medicine. AI algorithms offer great promise to add value to the provider's decision-making process, but they can also be a dangerous arena for the continuation of bias found in the sample data population used to create them. The discussion and exploration of knowledge augmentation algorithms will continue, and the potential for bias should not be taken lightly. Companies utilize AI in conjunction with EMR systems to improve a provider's documentation time, make suggestions, and create notes using ambient voice technology, all within a workflow augmentation paradigm [7,13]. One article suggests that ambient voice technology with natural language processing may forever alter the way a provider documents, allowing for more time with the patient and less time on chart documentation [13]. Barriers to implementation include upfront cost, training, data sharing, and compatibility with existing technology [13,14]. The authors also discussed bias and risk to data privacy as potential issues to be aware of [13]. True knowledge augmentation algorithms should be developed without bias and meticulously follow evidence-based diagrams. The algorithms should be explainable to the provider, meaning that the steps taken to reach a diagnosis should be discoverable. As alluded to by the participants, if an algorithm makes medical decisions, it should go through board certification.

Workflow augmentation is perhaps a more promising area for AI to provide meaningful assistance, addressing problem areas in billing, scheduling, and nursing workflows and triage. AI performs faster and more efficiently than humans in billing, scheduling, accounting, and management of processes. AI chatbots can improve efficiency in nursing and triage processes in the clinic [15]. Healthcare is a complex system of professionals, departments, and institutions. Utilizing engineering planning paradigms such as Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) and the Systems Engineering Initiative for Patient Safety (SEIPS) to evaluate, implement, and monitor AI-enabled systems within the culture of each department the system interacts with provides an opportunity to collect data on how AI has addressed a given process and decreased inefficiencies [16]. Instead of looking just at the tool's effectiveness, Li et al. suggest the importance of understanding the workflows of each individual department and how these will change with the utilization of AI [16]. The entire system, including workflows altered or eliminated, is evaluated, as well as the effect of the particular algorithm on quality and efficiency [16]. This delivery science rests on three principles: healthcare is delivered in complex systems and AI must adapt; AI should be viewed as a tool used as part of the broader system and not as the end product; and the problems that AI addresses consist of a complex web of people, processes, and technologies [16]. The implementation of AI and ML in medicine should not be about implementing a particular algorithm. Instead, quality healthcare systems are built by finding the best solution available to each particular set of problems [16]. Finding the right tool for the right problem is essential to consider when evaluating the implementation of AI in healthcare. For example, the authors discuss an algorithm built to predict acute kidney injury with high accuracy [16]. It was built under the premise that it would improve care by decreasing time to diagnosis. However, when put into practice, the algorithm was a burden to physicians, and its value to the provider and patient was unclear [16]. This example highlights the importance of understanding the workflows the algorithm will affect [16]. Building an algorithm for the sole purpose of predicting a task in healthcare is not adequate to improve care. The broad view of all persons, departments, and workflows affected must be taken into account [16].

Data privacy and access are critical issues, and ones for which policy and process must be developed to ensure the safe acquisition and transfer of data in the healthcare sector. The exchange of patient data among healthcare institutions, academic research, and industry can expose sensitive information and allow malicious data breaches [17]. Efforts must be taken to limit the risk of such data breaches. Before using an extensive data set of patient information, understanding issues involving patient consent is essential [17]. Some ethics committees do not require consent for deidentified information, while some prefer to use an opt-out method. This method could lead to bias, as only the most engaged patients will be involved [17]. Value is an essential factor to consider when implementing a new product or service. The same is true for healthcare and especially AI in healthcare. For a system or provider to change workflow, the value of the change to the new service must be shown. The user and the consumer must see value in the service and gain trust to utilize the service. The perceived risks to the implementation of AI include fear of new technology, ethical or trust issues, and regulatory concerns [18]. Performance and communication concerns were noted to be the highest predictor of risk belief among survey participants [18]. The panelists discussed the data and the reality of data fatigue for the healthcare provider. The analysis of data should be meaningful for the endpoint of a particular problem. Brault and Saxena discussed the large amounts of data collected with mobile health technology and questioned the validity of the data due to inherent bias [19]. The authors suggest the value of data is not just the amount collected but the appropriate evaluation and analysis of said data [19].
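Deidentification before sharing can be sketched as below: direct identifiers are dropped, the record ID is replaced with a salted hash so rows remain linkable without naming the patient, and birth year is generalized into an age band. The field names and reference year are hypothetical, and a real pipeline would implement the full HIPAA Safe Harbor identifier list rather than this short one.

```python
import hashlib

def deidentify(record: dict, secret: str = "rotate-me") -> dict:
    """Drop direct identifiers, replace the MRN with a salted hash so
    records stay linkable, and coarsen birth year into an age band.
    Field names are illustrative assumptions, not a standard schema."""
    DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "mrn"}
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Salted hash keeps the link across systems without exposing the MRN.
    clean["patient_token"] = hashlib.sha256(
        (secret + record["mrn"]).encode()).hexdigest()[:12]
    # Generalize quasi-identifiers: age bands instead of exact birth years
    # (2024 is a hard-coded illustrative reference year).
    if "birth_year" in clean:
        clean["age_band"] = f"{(2024 - clean.pop('birth_year')) // 10 * 10}s"
    return clean

record = {"mrn": "000123", "name": "Jane Doe", "phone": "555-0100",
          "birth_year": 1961, "diagnosis": "hypertension"}
print(deidentify(record))
```

The design choice worth noting is the salted hash: unsalted hashes of MRNs or names are trivially reversible by dictionary attack, which is one way "deidentified" data gets declassified.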

Unconscious bias in healthcare is understood as the attitude and opinion toward a particular person or group that affects perceptions by changing the way care is provided [20]. Bias is inherent in the way medicine is practiced. The siloed nature of care in the United States creates homogenous pools of patients from which data are pulled for research and process improvement projects. Algorithms expose bias already embedded in the process due to the particular data set being studied, a theme that emerged in the discussion. Brault and Saxena offer three principles to keep in mind for further research. The first principle is to create a catalog of bias listing the source and ramifications of bias [19]. The second deals with creating standards for the use of AI as a tool in medicine [19]. The third principle the authors discuss concerning the research of AI is to develop an approach to evaluate and analyze the effectiveness of AI to solve a particular problem [19]. The reality of AI and the change to specific processes and workflows in medicine will require training for healthcare professionals in the basic language of AI and algorithms. Training will open up the world of AI to professionals and help to safeguard this growing technology in the complex arena of medicine.

Although approximately 50 people registered for the event, fewer than half were able to attend. Reasons for this could be online meeting fatigue, time of day, and lack of good reminders. Although the event had technical difficulties initially and the first five minutes were not captured, 52 minutes of discussion were transcribed and analyzed. The symposium drew expert panelists from various aspects of the health technology industry; however, more industry leaders should be included in this discussion, including, but not limited to, experts on interoperability, health policy as it relates to AI and emerging technology, and system strategy and management to evaluate new care models emerging through the use of AI.

The research for the analysis was conducted through Google Scholar. The search could have been expanded to include other search engines to potentially include more articles for study. Terms such as machine learning, deep AI, and neural networks were uncovered in the search conducted; however, the search could be repeated using these and other terms to potentially increase the article pool.

Read the original post:
A Qualitative Thematic Analysis of Addressing the Why: An Artificial Intelligence (AI) in Healthcare Symposium - Cureus

Tenth Circuit Appeals Court Says Fourth And Sixth Amendment Rights Are Meaningless When National Security Is On The Line

A case involving the first criminal suspect to be notified by the DOJ that evidence against him was derived from Section 702 surveillance has just come to an end. The Tenth Circuit Appeals Court has decided there's nothing wrong with the government's FISA-enabled warrantless surveillance programs.

The ACLU, which helped represent the US resident whose communications were collected and intercepted under FISA court orders, summarizes the outcome of the decision:

In a sharply divided ruling, the Tenth Circuit Court of Appeals today wrongly held that the warrantless surveillance of Jamshid Muhtorov, a legal permanent resident whose email communications were searched by the U.S. government under Section 702 of the Foreign Intelligence Surveillance Act (FISA), was lawful. The court also ruled that the egregious seven-year delay before Mr. Muhtorov's trial didn't violate the Speedy Trial Act.

Muhtorov, whose road to arrest involved surveillance programs exposed by Edward Snowden and an FBI informant pretending to be a sympathizer, was arrested. He spent those six years in jail as a pre-trial detainee.

The fresh new Is attractive Legal doesnt have an issue with any kind of that it. They says brand new security one directed new foreign organizations Muhtorov communicated with try constitutional since these men and women legal rights are not applied to overseas monitoring needs. Muhtorov, an appropriate Us citizen, are focused just after their correspondence had been by-the-way collected, causing the government intercepting a keen untold number of characters and you will 39,100 circumstances from audio files.

The incidental collection of a US person's communications is also legal, says the Appeals Court. It says those were in plain view, being the other end of targeted foreign communications the government needs no warrant to obtain. If the first step was legal, everything that flowed from it is likewise constitutional.

As for the very long delay between Muhtorov's arrest and his trial, the court says, in effect, that this all could have gone a lot quicker if Muhtorov hadn't engaged in his right to examine the evidence the government wished to use against him. That national security precautions meant he wasn't able to actually see much of the evidence being used is somehow beside the point. The fact that the government had to gather it and run it past the district court judge shouldn't be held against the government, the court declares.

The long dissent [PDF], written by Judge Carlos Lucero, excoriates the majority for nearly every conclusion it reached, but spends plenty of time taking the court to task for deciding it was the defendant's fault the government took so long to produce requested evidence.

I begin with approximately two years of delay that are uncontestably attributable to the government. For just over 21 months, the government did not notify Muhtorov of its use of 702 evidence in the case against him. My colleagues contend this delay did not extend the pretrial period, because this nearly two-year delay was encompassed in the six-and-one-half-year delay resulting from discovery production. This approaches double-speak: what the majority is saying is that all government delays are excusable within the overall delay in discovery production. As I note below, the government's delays in discovery production are swept away by my colleagues in conclusory language, to the end that almost six and a half years in bringing these defendants to trial is excused, thereby setting a new Sixth Amendment standard of speed.

Continue reading here:
Tenth Circuit Appeals Court Says Fourth And Sixth Amendment Rights Are Meaningless When National Security Is On The Line

Chuck Todd: Was It A Mistake To "Overly Censor" Donald Trump? "Deplatforming Him Has Sort Of Protected The Public"

NBC's Chuck Todd asked on Thursday's broadcast of his MSNBC program if it was a good idea to ban former President Donald Trump from social media platforms and if it is possible to deescalate the "threat that he really presents."

Warner said, "Clearly we're not silencing him. He has an outlet on some of these far-right-wing media where he's speaking to people that believe this garbage."

"Do you think it's been a mistake to overly censor Donald Trump and the stuff he's been saying?" Todd asked Warner. "You know, earlier this week, he also came out and admitted that, yeah, I don't like NATO much, even now, where you're just sitting here, it hit me like a ton of bricks. NATO has been -- this is the most effective NATO has looked in a decade or more. And to just also trash NATO, a former U.S. president doing it, over literally the last 96 hours, you know, deplatforming him has sort of protected the public, right, allows people to say no one cares about it, because I understand people have compartmentalized him."

"There's this idea that he can be incendiary. But sometimes, this is what happens when you try to censor. If you try to censor even the good stuff and you think it's a good idea, are we inadvertently de-escalating the threat that he really presents?" Todd asked.

Warner's response:

Clearly, we're not silencing him. He has an outlet on some of these far-right wing media where he's speaking to people that believe this garbage, but when the Ukrainian people are literally, as I believe, not just fighting for their freedom, but they're fighting on behalf of democracies across the world, and you have a former president of the United States once again kowtowing to Putin and asking for, you know, political dirt and undermining NATO, I never thought I'd see this behavior. How you regulate and deal with that, I wish I had a better answer for you. I don't.

CHUCK TODD: There's a lot of people in this country that are rightly outraged about a former German chancellor playing footsie with Putin. It's outrageous then and now when a former U.S. president does it as well.

Go here to read the rest:

Chuck Todd: Was It A Mistake To "Overly Censor" Donald Trump? "Deplatforming Him Has Sort Of Protected The Public"

There is no free-speech right to a university platform – Times Higher Education

Politics is alive again on university campuses, inspiring students to speak up on social issues. This retreat from political apathy is good news for democracy. But it is rather confined.

Students care deeply about gender and race identity and believe that how we speak publicly is a matter of social justice. This focus is a far cry from the protests against the invasion of Iraq, or the introduction of fees in higher education, which took students to the streets a generation ago. Even recent political controversies, such as the justifiability of strict lockdowns or mandatory vaccination, fail to get students very passionate. Many appear much more concerned with what pronouns people may use and who should be given a university platform. As one of my students told me last week: "It is the only area of public life where our voice gets heard."

The emphasis on identity and speech has redrawn the old political maps. Right-wing conservatives used to favour restrictions on speech aimed at upholding public morals and family values. Now, they champion freedom of speech to criticise the deplatforming of far-right figures and so-called cancel culture. Left-wing progressives used to oppose government regulation of speech. Now, they call for government restrictions on hate speech and the public observance of linguistic norms of political correctness. The new free speech wars are toxic and messy.

The Public Order Act 1986 prohibits the expression of hatred on account of certain protected characteristics. But many students feel that even speech within the limits of the law might make them unsafe and increase the risk of rights violations. This argument is not new: it was made by prominent feminists in the 1980s and 1990s, who called for a ban on pornography. Except that the argument is now turned against the feminists who deny that transgender women are women. Kathleen Stock, a prominent gender-critical feminist who has been at the receiving end of student protests and attempts to deplatform her, invokes free speech and academic freedom in her defence.

Controversies surrounding who has the right to speak at university platforms are not new, either. Universities in modern times have always been places of contestation. Students have always claimed the right to disrupt university events as an act of political protest. In most countries, the tension between speech and protest within campuses is handled internally, free from government interference. But in the UK, the government is now proposing a new law, through the Higher Education (Freedom of Speech) Bill, which would give it great powers over universities. It makes academic no-platforming an offence and gives legal powers to a regulator to monitor university practices. The government is seeking to assert control over academic speech and its proper balance against freedom of protest. We must therefore ask who has rights over academic platforms.

It is easy to conflate academic freedom with freedom of speech, as the proposed legislation does. The aim of universities is to pursue knowledge in a scholarly, critical and impartial way. The free exchange of ideas, particularly through publishing, is essential to that aim. But the primary meaning of academic freedom is not to reiterate a right to freedom of speech that everyone has under article 10 of the European Convention on Human Rights (ECHR).

There are an infinite number of platforms online, via which any speaker, academic or otherwise, can freely publicise their views. Unlike airwaves, platforms for addressing the public are no longer a scarce public good, to be distributed equally under some conception of social justice. What distinguishes university platforms is that they are credible and influential because they are typically run by experts. But there is no such thing as a free-speech right to a good platform. Jimmy Fallon does not breach my free-speech right by not inviting me on to The Tonight Show. Nor am I silenced by law colleagues who do not invite me to academic conferences on how human rights judges are abusing their power. They are free to have a balanced debate among those who oppose judicial review, and exclude those like me who defend it. You cannot be wronged by being deprived of something to which you had no right.

The point of academic freedom is rather different. It is to assert the independence of the academic community from government, in the pursuit of knowledge. Government has no right to tell academics what to research or teach, what views to defend, or what to publish. Orthodoxies in academic disciplines should rise and fall in a bottom-up way, through peer debate and criticism, not top-down, through government fiat.

This independence of the academic community from government extends to most of its functions, including who should be given or denied an academic platform. Government may not force me to invite home secretary Priti Patel to speak in my human rights seminar series, let alone to defend her policies. Nor can it force me not to rescind an invitation to her if I realise subsequently that her talk will only muddle the debate about immigration and human rights. When the subject-matter is academic, speaker invitations are for me and my colleagues to decide.

I do not mean to suggest that rescinding an invitation to speak is never wrongful. Just like disinviting a guest from one's dinner party, it may break promises made to a speaker, defeat their expectations, or frustrate their plans. But these are neither free speech wrongs nor violations of academic freedom. What violates academic freedom is to take control of university platforms away from academics, as the proposed bill does.

We cannot respect a right to extend invitations without respecting a right to rescind them.

It is tempting to think that the bill does not take control away from academics but, on the contrary, secures it. Deplatforming is often not the choice of the organisers but the result of pressure or disruptive protests by students. We might naturally worry that small groups within universities will acquire a veto over what views academics can express on campus. Or we might worry that this veto always favours one side of the political spectrum, alienating students on the opposite side. But the idea that government should protect academics from their students is not straightforward.

Students have an abstract political right to freedom of protest on campus, including protest that is disruptive. Often, this right prevails over the aim of holding a public debate on some issue. For example, students may legitimately disrupt an event in which a speaker defends slavery, as long as they stay within the limits of the criminal law.

Disruptive protest, too, is a form of speech, falling under article 10 of the ECHR. By making deplatforming an offence, applicable also to student unions, the government will extinguish students' right to protest on campus, a right whose exercise has proved pivotal in the past in fighting evil regimes and serious injustices, like apartheid. Just as we would not want government to decide which topics are legitimate for academic research, likewise we should not want government to decide which topics are legitimate grounds for disruptive protests.

Could universities come up with a list of views that all academics agree must never be expressed on campus and insist that any speech outside that list should be protected? The problem is that there is no agreement on what should be on that list. Nor has there been one historically. What is now the dominant view of what counts as extreme or dangerous speech started out as the dissident voice of a small group of student activists. Do we really want to claim that the current majority holds the definitive view, for all of history, of which forms of speech are out of bounds?

Nor is it a good argument that speakers whose opinions meet some scholarly standards of academic rigour should never be deplatformed. Academics disagree within and across disciplines on what these standards are. It is a noble task to try to show why one's views are not hateful or bigoted, but reasonable and scholarly. But one cannot expect others to agree. University platforms belong to no one and to everyone within the academic community.

A direct consequence of this conception of academic freedom is that mobilised groups of students may frequently succeed in disrupting talks that an academic is scheduled to give outside their own university, on the sole basis that they find their views unacceptable. And this is possible even when the academic's views are reasonable and far from hateful or offensive. It could happen to me because of this article.

Such a predicament is difficult and unfortunate. No speaker likes to be heckled, or forced to walk away from a platform. But assuming there is no harassment or other violations of the criminal law, it would hardly be a violation of either my right to free speech or my academic freedom.

As long as my job is protected, I can still set the syllabus and the exam questions for my course. I can air my views in countless online platforms. I can publish them in academic journals, including a new title dedicated to controversial ideas. I will most likely receive invitations or job offers from university departments where academics and their students find my ideas agreeable. I will be able to reach whatever audience my views deserve.

It is true that in a worst-case scenario, an academic with controversial views may be ostracised from most university platforms in the country, simply because of a small but vocal minority of protesting students. This would be a sad state of affairs for democracy, but it is the risk that scholars who publicly advocate their views have to take. The alternative is far worse: to allow government to extinguish students' right to political protest on campus and to prevent academics from exercising their own judgement as to whether an invitation should be rescinded.

Universities must, of course, condemn protests that target individual academics, rather than public events, and that cross the limits of the law. Harassment is a criminal offence and universities have a duty to protect academics against any unlawful behaviour on campus. This is particularly important when disruptive protests come from students against their own professors, since one central element of harassment is persistent conduct that causes distress. It is one thing to be deplatformed or no platformed from public events taking place at other universities. It is a whole different thing to face constantly intimidating protests by one's own students when carrying out day-to-day employment tasks, such as teaching a class or using one's office. But the proposed bill fails to distinguish between visiting speakers and university employees, prohibiting denial of access to platforms in both cases. And no new legislation is needed to ensure that academics are protected in their workplace against harassment.

In reality, the Higher Education Bill has little to do with freedom of speech. It seeks to assert control over academic platforms that, as it happens, will mainly benefit right-wing speech by external speakers. But the very sponsors of this bill would be appalled if the next parliament passed a law that gives government such drastic powers over university speech for a different purpose, such as to mandate the use of gender-neutral pronouns on campus. And whatever arguments they would make against that law also apply against their own bill. The independence of academic speech from government cannot be selective.

If government really cared about academic freedom, it would restrict the scope of the proposed offence to cases where academics lose their jobs or are denied promotion merely because of their ideas, beliefs or views. It would specify that university management must not force or put pressure on academics to cancel their courses or change their syllabuses. It would protect academics against any content-based interference by university management, such as monitoring sensitive courses and events or asking academics to share in advance what opinions or views they plan to express.

These are the real threats to academic freedom, and they come from university management. They are materialising already within UK higher education, facilitated by the consumerist culture that has come to dominate universities after the introduction of tuition fees.

The current debate on deplatforming and free speech is a distraction. We must ask: what good is it to academics if access to university platforms is protected by law when the core of their academic freedom has been taken away?

George Letsas is professor of the philosophy of law at UCL.

There is no free-speech right to a university platform - Times Higher Education

Will Smith’s Slap Is Political Correctness Taken to Its Logical Conclusion | Opinion – Newsweek

It's been three days since actor Will Smith slapped Chris Rock at the Oscars after Rock made a joke about Smith's wife, Jada Pinkett Smith, and the hot takes are still coming. The slap set off a fierce debate about whether Smith was violently overreacting to a joke or gallantly standing up for his woman against misogyny and racism. But lost in the conversation has been the precedent for Smith's actions: Smith's slap wasn't the result of some outdated notion of honor culture but something much more mundane: political correctness shutting down comedy.

For though the event shocked the world, the incident in question did not occur in a cultural vacuum. The climate surrounding comedians has been rife with controversy for years now, with comedians frequently the targets of calls for censorship, deplatforming and reputation destruction in the name of social awareness.

Just last year, comedian Dave Chappelle's comedy special "The Closer" dominated the news cycle after a controversy erupted over his allegedly "anti-trans" jokes. The special sparked such outrage among LGBTQ activists that a protest flared up at Netflix's headquarters. Demands were made that the special be removed from the streaming network. Despite Chappelle's insistence that his jokes were in good humor and not hate-filled rants against the trans community, his pleas fell on deaf ears; mainstream media outlets continue to associate Chappelle's image with transphobia to this day.

It's this trend that Will Smith has joined: the one that sought to cancel Chappelle over jokes, and the one that recently set its sights on Joe Rogan, who was subjected to the collective ire of medical professionals and mainstream media outlets after using his successful platform to deviate from the accepted narrative on COVID-19 vaccinations, mandates and lockdowns.

The Rogan cancelation attempt peaked absurdly when White House press secretary Jen Psaki urged Spotify, the streaming network that hosts Joe Rogan's popular podcast, to take further action against Rogan and help stop the spread of "misinformation." We witnessed the White House, in its official capacity, urging a private company to dampen the voice of a comedian who had committed no crime or violation of the law.

Saturday night's Academy Awards controversy was yet another incarnation of this pernicious trend, but one that escalated to the absurd degree of resulting in physical violence. And while many are decrying the violence on display, the truth is that the incident between Will Smith and Chris Rock was the culmination of a precedent that has been culturally sanctioned by a powerful liberal elite to silence, slander and demonize comedians and commentators who dare to trigger cultural or political sensitivities.

If the federal government is willing to openly demand that private platforms censor comedians, if the mainstream media can benefit from scarring the reputations of those who question their edicts, then why shouldn't any means of silencing, including violence, be the next logical step?

Anyone less famous or influential than Smith would have been carted off stage by security for such an act. And yet, after slapping Chris Rock for making an innocuous joke about his wife's hairstyle (Jada Pinkett Smith has alopecia, something Rock later said he was not aware of), Smith and Pinkett Smith were showered with affirmation from fellow celebrities. Smith went on to win an Oscar that night and then gave a long-winded, tearful speech saturated in racial justice platitudes and appeals to the Black community, which further earned the sympathy of those with cultural allegiance to the powerful social justice constituency.

No apology was made that evening to Chris Rock, which sent a worrying message to the entire world after we all witnessed one of Hollywood's most beloved figures assault a comedian over "hurt feelings" and suffer no consequences.

What does this communicate at a time when social critics and commentators are quickly becoming a favored political scapegoat?

It communicates that we are in a culture that has become so narcissistic, so impotent and so humorless that even a slight cultural provocation that deviates from the strict and often absurd rules of "political correctness" can be met with the most inhumane violations of personal rights and freedoms, from federal overreach (Joe Rogan) to physical attacks (Chris Rock).

It's no surprise that in a media and political landscape where conformity, compliance and deference to official narratives are heavily incentivized, growing cultural hostility toward comedians and social critics would also flourish, and this creates an environment dangerous and unstable enough to deter free expression.

It sends the powerful and terrifying message that those who offend, deviate or provoke will not be afforded the same protection, charity and dignity as those who conform.

Nobody should be assaulted for telling a joke. Such behavior should never be culturally sanctioned, normalized or left unpunished. Only time will tell if this unfortunate moment in entertainment history will become yet another example of culturally sanctioned intimidation and hostility directed at comedians and commentators, or if something will finally be done to combat this unhealthy cycle.

Angie Speaks is a cultural commentator and cohost of the Low Society Podcast.

The views in this article are the writer's own.
