New blood test study uses artificial intelligence to identify cancer. But it's not ready for patients yet. – Cancer Research UK – Science Blog

Credit: Vascular Development Laboratory and EM Unit

A blood test that can detect over 50 cancer types is big news this week.

There's a lot of excitement around the latest research, published in Annals of Oncology. And it's easy to see why.

Scientists have used machine learning to help identify if someone has cancer based on tiny bits of tumour DNA floating in their blood, which could open the door to a blood test that can detect and identify multiple types of cancer.

But it's not there yet. And in the blood test buzz, some news articles have missed out crucial details.

The team looked for differences in the DNA shed from cancer cells and healthy cells into the blood.

They focused on differences in chemical tags that sit on top of the DNA in cells, called methyl groups. These groups are usually spread evenly across the DNA in cells, but in cancer cells they tend to cluster at different points. And it's this distinction scientists wanted to exploit.

They trained a machine learning algorithm, a type of artificial intelligence that picks up patterns and signals, to detect differences between methylation patterns in DNA from cancer and non-cancer cells.

The algorithm was trained on 3,052 samples from people with and without cancer from two large databases.

And once the program was fired up and ready to go, the team tested its cancer-spotting ability on a different set of 1,264 samples, half of which were from people with cancer.
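To make the train-then-test workflow above concrete, here is a toy sketch. The real study used bespoke machine learning models on methylation sequencing data; this stand-in uses a made-up one-dimensional "methylation score" and a simple nearest-centroid rule, purely to illustrate training on one set of samples and evaluating on a separate held-out set of the same sizes.

```python
import random

random.seed(0)

def make_samples(n, cancer):
    # Hypothetical one-dimensional "methylation score": cancer samples
    # tend to score higher than non-cancer samples.
    mean = 1.0 if cancer else 0.0
    return [(random.gauss(mean, 1.0), cancer) for _ in range(n)]

# Mirror the study's split sizes: 3,052 training and 1,264 test samples.
train = make_samples(1526, True) + make_samples(1526, False)
test = make_samples(632, True) + make_samples(632, False)

def centroid(samples, cancer):
    # "Training": learn the average score for each class.
    values = [score for score, label in samples if label == cancer]
    return sum(values) / len(values)

c_cancer = centroid(train, True)
c_healthy = centroid(train, False)

# "Testing": classify each held-out sample by its nearest class centroid.
correct = sum(
    (abs(score - c_cancer) < abs(score - c_healthy)) == label
    for score, label in test
)
print(f"held-out accuracy: {correct / len(test):.1%}")
```

The key point the sketch illustrates is that the algorithm's cancer-spotting ability is always measured on samples it never saw during training.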

Any test with the goal of being able to detect cancers at their earliest stages in people without symptoms must strike the right balance between picking up cancer (sensitivity) and not giving false positives (specificity). We've blogged before about what makes a good cancer test, as well as the efforts to develop a cancer blood test.

How do you assess a cancer test?

Researchers look at three main things when assessing a new diagnostic test.

First, the good news: fewer than 1% of people without cancer were wrongly identified as having the disease, which is a good sign for the specificity of this test.

And when it came to detecting cancer, across all types of cancer, the test correctly identified the disease in 55% of cases. This is a measure of the test's sensitivity.

But there was huge variation in sensitivity depending on the type of cancer and how advanced the disease was. The test was better at picking up more advanced cancer, which makes sense: more advanced cancers typically shed more DNA into the bloodstream.

If we look at the numbers, across all cancer types the test correctly detected the disease in 93% of those with stage 4 cancer, but only 18% of early, stage 1 cancers.
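Sensitivity and specificity, the two measures discussed above, come straight from the counts of correct and incorrect calls. A minimal sketch; the counts below are hypothetical, chosen only to mirror the headline figures, and are not the study's raw data.

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true cancer cases the test correctly flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of cancer-free people the test correctly clears."""
    return tn / (tn + fp)

# Hypothetical counts: 632 cancer samples with 348 detected,
# 632 cancer-free samples with 628 correctly cleared.
print(f"sensitivity: {sensitivity(tp=348, fn=284):.1%}")  # about 55%
print(f"specificity: {specificity(tn=628, fp=4):.1%}")    # over 99%
```

The trade-off the article describes is between these two numbers: tuning a test to flag more cancers (raising sensitivity) tends to flag more healthy people too (lowering specificity).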

An important consideration is that the study only tested whether the algorithm could detect cancer in patients who were already known to have cancer. According to the researchers, these figures may change if the test were used on a wider, general population.

Encouragingly for a multicancer test, when the researchers looked at a smaller number of samples to explore if the test helped them identify where the cancer was growing, the algorithm was able to predict the location in 96% of samples, and it was accurate in 93%.

First things first: although the sample numbers are big, they become a lot smaller when you break them down by cancer type and cancer stage. Some cancer types were particularly poorly represented, with only 1 or 2 samples included in the final analysis, so there's more work to do there. Based on this, it's a bit too soon to say that the test can pick up 50 cancer types.

And if the plan is to use this as a screening tool, then the researchers will need to do more to study people who didn't have symptoms when they were diagnosed. The current study included people who were symptomatic as well as people without symptoms.

And the participant data lacked variation in age, race and ethnicity: between 83% and 87% of all the samples used to train and test the algorithm were from Caucasian participants.

The big conclusion is that these results are encouraging and should be taken forward into bigger studies. But it's important to put the results in context: they're a step in the right direction, and there are a lot of steps between this study and a fully fledged cancer test.

According to the research team, they plan to validate the results using samples from US and UK studies, as well as to begin examining whether the test could be used to screen for cancer. We look forward to seeing the results.

Our head of early detection research, Dr David Crosby, sums it up nicely: "Although this test is still at an early stage of development, the initial results are encouraging. And if the test can be fine-tuned to be more efficient at catching cancers in their earliest stages, it could become a tool for early detection.

"But more research is needed to improve the test's ability to catch early cancers, and we still need to explore how it might work in a real cancer screening scenario."

Lilly



Artificial Intelligence: IDTechEx Research on State-Of-The-Art and Commercialisation Status in Diagnostics and Triage – Yahoo Finance

BOSTON, March 31, 2020 /PRNewswire/ -- Artificial intelligence (AI) is revolutionizing medical diagnostics. State-of-the-art results have already demonstrated that software can achieve fast and accurate image-based diagnostics for various conditions affecting the skin, eye, ear, lung, breast, and more. These technological advancements can help automate diagnosis and triage, speeding up referrals especially in urgent cases, freeing up expert resources, offering consistent accuracy regardless of local skill levels, and making these services more widely available. This is a ground-breaking development with far-reaching consequences, and naturally many innovators are scrambling to capitalize on it.

The report "Digital Health & Artificial Intelligence 2020: Trends, Opportunities, and Outlook" from emerging technology research firm IDTechEx examines this trend towards digital and AI applications in health. It outlines the state-of-the-art in AI-based diagnosis of various conditions affecting the skin, eye, heart, breast, brain, lung and blood, as well as genetic disorders. The data sources employed are diverse, including dermoscopic images, fundus images, OCT, CT, CTA, echocardiograms, electrocardiograms, mammography, pathology slides, low-resolution mobile phone pictures and more. The report then identifies and highlights companies seeking to capitalize on these technology advances to automate the diagnostic and triage process.

Furthermore, the report considers the trend of digital health more generally. It provides a detailed overview of the ecosystem and offers insights into the key trends, opportunities and outlooks for all aspects of digital health, including: telehealth and telemedicine, remote patient monitoring, digital therapeutics / digiceuticals / software as a medical device, diabetes management, consumer genetic testing, the smart home as a carer, and AI in diagnostics.

Ground-breaking technology

Significant funding is flowing to start-ups and R&D teams of large corporations who develop AI tools to accelerate and/or improve the detection and classification of various diseases based on numerous data sources ranging from RGB images to CT scans, ECG signals, mammograms and to pathological slides. The state-of-the-art results demonstrate that software can do these tasks faster, cheaper, and often more accurately than trained experts and professionals.

This is an important development which, if successful, can have far-reaching consequences: it can make diagnostics much more widely available and it can free up medical experts' time to focus on more complex tasks which currently sit beyond the capabilities of AI-based automation. The technology is today making leaps forward, but technology is only a piece of the puzzle, and many other challenges will need to be overcome before such software tools are widely adopted. However, the direction of travel is clear.

This trend is on the rise today because (a) the availability of digitized medical data sources is rapidly increasing, offering excellent algorithm training feedstock, and (b) advancements in AI algorithms, especially trained deep neural networks, are enabling software to tackle tasks it hitherto could not.


The IDTechEx report "Digital Health & Artificial Intelligence 2020: Trends, Opportunities, and Outlook" outlines many such advancements and identifies some of the key companies pursuing each opportunity. The remainder of this article briefly outlines two specific cases: eye disease and skin disease.

Eye Disease

Diabetic retinopathy is a complication of diabetes that affects the eye. Researchers from India have recently shown that software can accurately interpret retinal fundus photographs to enable a large-scale screening program for diabetic retinopathy. The software is trained to make multiple binary classifications, allocating a risk level to each patient. The algorithm was trained and tuned on a total of more than 140,000 images, and it matched or exceeded the sensitivity and selectivity achieved by trained human graders, reaching 92.1% sensitivity and 95.2% selectivity.

Naturally, there is a strong business case here, and many are seeking to capitalize on it. One example is IDx, based in Iowa in the US, which has designed and developed an algorithm to detect diabetic retinopathy. Its AI system achieves a sensitivity and specificity of 87% and 90%, respectively. As early as 2017, it was tested at 10 sites across the US on 900 patients.

A very insightful test in eye clinics is OCT (optical coherence tomography), which creates high-resolution (5 µm) 3D maps of the back of the eye that require expert analysis to interpret. OCT is now one of the most common imaging procedures, with 5.35 million OCT scans performed in the US Medicare population in 2014 alone. This creates a backlog in processing and triage, and such delays can be harmful when they cause avoidable treatment delays for urgent cases.

DeepMind (Google) has demonstrated an algorithm that can automate the triage process based on 3D OCT images. The algorithm design has some unique features. It consists of two stages: (1) a segmentation network and (2) a classification network. The first network outputs a labelled tissue-segmentation map; based on the segmented maps, the second network outputs a diagnosis probability for over 50 sight-threatening eye conditions and provides a referral suggestion. The first network was trained on 877 sparsely, manually segmented images and the second on 14,884 training tissue maps with confirmed diagnoses and referral decisions. This database is one of the best-curated medical eye databases worldwide.

This two-stage design is beneficial in that when the OCT machine or image definition changes, only the first part will need to be retrained. This will help this algorithm become more universally applicable. In an end-to-end training network, the entire network would need to be retrained.
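The modularity argument above can be sketched in a few lines. This is a hypothetical illustration, not DeepMind's code: the function names and string "outputs" are stand-ins, and the point is only that stage 2 consumes a device-independent tissue map, so swapping scanners requires retraining stage 1 alone.

```python
def segment_v1(scan: str) -> str:
    """Stage 1: scanner-specific segmentation into a device-independent tissue map."""
    return f"tissue_map({scan})"

def segment_v2(scan: str) -> str:
    """Stage 1 retrained for a new OCT device; same output format as v1."""
    return f"tissue_map({scan})"

def classify(tissue_map: str) -> str:
    """Stage 2: diagnosis and referral suggestion from the tissue map.
    Depends only on the map format, not on the scanner."""
    return f"referral_for({tissue_map})"

def pipeline_old(scan: str) -> str:
    return classify(segment_v1(scan))

def pipeline_new(scan: str) -> str:
    # Only stage 1 was retrained; stage 2 is reused untouched.
    return classify(segment_v2(scan))

# Both pipelines produce the same downstream result for equivalent maps.
assert pipeline_old("scan_A") == pipeline_new("scan_A")
print("stage 2 reused without retraining")
```

In an end-to-end network, by contrast, there is no stable intermediate interface, so a change of scanner forces retraining of the whole model.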

DeepMind demonstrated that the performance of its AI in making referral recommendations reaches or exceeds that of experts on a range of sight-threatening retinal diseases. The error rate on referral decisions is 5.5%, matching or exceeding specialists even when the specialists are given fundus images and patient notes in addition to the OCT. Furthermore, the AI beat all retina specialists and optometrists on selectivity and sensitivity in referring urgent cases. This is clearly only a first step, but an important one that truly opens the door.

Skin disease

Researchers at Heidelberg have already demonstrated that trained deep neural networks, in this case based on Google's Inception v4 CNN architecture, can recognize melanoma from dermoscopy images. These researchers showed that the software achieves 10 percent more specificity than human clinicians when its sensitivity is set at a level matching the clinicians'. The machine can achieve a high 95% sensitivity at 63.8% specificity.

This is a promising result showing that such diagnostics can be automated. Indeed, multiple companies are automating skin cancer detection. One example is SkinVision, from the Netherlands, which offers a skin cancer risk rating based on relatively low-quality smartphone images. The company trained its algorithm on more than 131,000 images from 31,000 users in multiple countries, with the risk rankings of the training images annotated by dermatologists. Studies show that the algorithm can score 95.1% sensitivity in detecting (pre)malignant conditions at 78.3% specificity. These are good results, although the specificity may need to improve, as false alarms could unnecessarily worry some patients.

The business cases are not limited to cancer detection. Haut.AI is an Estonian company that proposes using images to track skin dynamics and offer recommendations. For example, its AI can be a simple and accurate predictor of chronological age using just anonymized images of eye corners. The networks were trained on 8,414 anonymized high-resolution images of eye corners labelled with the correct chronological age. For people aged 20 to 80 in a specific population, the machine reaches a mean absolute error of 2.3 years.
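Mean absolute error, the metric quoted above, is simply the average size of the prediction error. A minimal sketch; the ages and predictions below are invented for illustration and are not Haut.AI's outputs.

```python
def mean_absolute_error(true_ages, predicted_ages):
    # Average of |true - predicted| across all samples.
    return sum(abs(t - p) for t, p in zip(true_ages, predicted_ages)) / len(true_ages)

# Hypothetical ages and model outputs, purely for illustration.
true_ages = [25, 40, 63, 71]
predicted = [27, 38, 60, 73]
print(f"MAE: {mean_absolute_error(true_ages, predicted):.2f} years")  # 2.25 years
```

So an MAE of 2.3 years means the model's age guesses are, on average, within about two and a half years of the truth.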

There are naturally many more start-ups active in this field. Some firms focus on health diagnostics, while others seek to use AI to create tailored skincare regimes and product recommendations. The path to market, and the regulatory barriers, for each target function will naturally differ.

To learn more about this exciting field, please see IDTechEx's report "Digital Health & Artificial Intelligence 2020: Trends, Opportunities, and Outlook" by visiting www.IDTechEx.com/digitalhealth. The report outlines the state-of-the-art in the use of AI in diagnosing a range of medical conditions, and identifies and discusses the progress of various companies seeking to commercialize such technological advances.

To connect with others on this topic, register for The IDTechEx Show! USA 2020, November 18-19 2020, Santa Clara, USA. Presenting the latest emerging technologies at one event, with six concurrent conferences and a single exhibition covering 3D Printing and 3D Electronics, Electric Vehicles, Energy Storage, Graphene & 2D Materials, Healthcare, Internet of Things, Printed Electronics, Sensors and Wearable Technology. Please visit http://www.IDTechEx.com/USA to find out more.

IDTechEx guides your strategic business decisions through its Research, Consultancy and Event products, helping you profit from emerging technologies. For more information on IDTechEx Research and Consultancy contact research@IDTechEx.com or visit http://www.IDTechEx.com.

Media Contact:

Jessica Abineri, Marketing Coordinator, press@IDTechEx.com, +44-(0)-1223-812300

View original content:http://www.prnewswire.com/news-releases/artificial-intelligence-idtechex-research-on-state-of-the-art-and-commercialisation-status-in-diagnostics-and-triage-301032810.html

SOURCE IDTechEx


58m Tempo by DLBA: superyacht optimized with artificial intelligence in every system – Yacht Harbour

DLBA Naval Architects has created a new artificial intelligence concept that makes it possible for some systems to be operated without any human intervention.

DLBA has selected a 58m superyacht concept to develop internally as an autonomous yacht. The result, TEMPO, will be a study in all vessel systems where artificial intelligence can be used to enhance an owner's experience onboard.

There are three main areas where autonomous technology can be used in the maritime world - navigational autonomy, equipment health monitoring, and mechanical and electrical systems automation. All reduce the need for human input while increasing efficiency.

Navigational autonomy relieves the workload on the vessel operator, and unmanned vessels have been operating in the commercial and military space for years. Hull, mechanical and electrical automation is like having an onboard engineering team at your fingertips. By ensuring elements at the sub-system level are AI-ready, the vessel can be kept operating efficiently at peak performance.

The number and complexity of auxiliary systems and equipment onboard yachts is increasing year on year, and with that comes an increasing demand on crews' time to interpret feedback from those systems. Equipment health monitoring lessens this demand.


The Limitations of Artificial Intelligence in Businesses – AZoRobotics

Written by AZoRoboticsApr 1 2020

Businesses are often tempted to employ a range of technologies, including artificial intelligence (AI), to enhance performance, reduce labor costs, and improve the bottom line, a goal that is entirely logical.

Image Credit: Rensselaer Polytechnic Institute.

However, before opting for automation that can potentially risk the jobs of humans, business owners should carefully assess their operations.

According to Chris Meyer, a professor of practice and the director of undergraduate education at the Lally School of Management at Rensselaer Polytechnic Institute, the same method should not be used when applying AI to each business.

Meyer has studied this topic and details his findings in a recent conceptual paper published in a special issue of the Journal of Service Management on AI and Machine Learning in Service Management.

AI has the potential to upend our ideas about what tasks are uniquely suited to humans, but poorly implemented or strategically inappropriate service automation can alienate customers, and that will hurt businesses in the long term.

Chris Meyer, Professor of Practice and Director of Undergraduate Education, The Lally School of Management, Rensselaer Polytechnic Institute

Based on Meyer's findings, the decision to use AI or automation has to be strategic. For example, if a company competes by providing an array of service offerings that shift from one client to another, or by offering a considerable amount of human interaction, then its business will be less successful if human experts are replaced with AI technologies.

Meyer further observed that the reverse is also true: businesses that restrict customer interaction and choice will see better results if they decide to automate.

Business leaders planning to migrate to automation should cautiously assess their strategies for handling knowledge resources. Before investing in AI, companies should first understand whether it is a strategically viable option to use algorithms and digital technologies in the place of human interaction and judgment.

The ideas are of use to managers, as they suggest where and how to use automation or human service workers based on ideas that are both sound and practical. Managers need guidance. Like any form of knowledge, AI and all forms of service automation have their place, but managers need good models to know where that place is.

Chris Meyer, Professor of Practice and Director of Undergraduate Education, The Lally School of Management, Rensselaer Polytechnic Institute

Meyer also established that in businesses where reputation and trust are vital to building and sustaining a client base, people will probably be more effective than automated technologies.

On the other hand, in businesses where human biases are specifically dangerous to the service provision, AI will serve as a comparatively better tool for companies.

Meyer further stressed that many businesses will eventually use a combination of automation and human skills to compete effectively. Even AI that can manage highly complicated jobs works optimally alongside humans, and vice versa.

Automation and human workers can and should be used together. But the extent of automation must fit with the business's strategic approach to customers.

Chris Meyer, Professor of Practice and Director of Undergraduate Education, The Lally School of Management, Rensselaer Polytechnic Institute

Source: https://rpi.edu/


VA Looking to Expand Usage of Artificial Intelligence Data – GovernmentCIO Media

The agency is looking at how to best apply curated data sets to new use cases.

The Department of Veterans Affairs is closer to expanding its use of artificial intelligence and developing novel use cases.

In looking back on the early stages of the VA's newly launched artificial intelligence program, the department's Director of AI, Gil Alterovitz, noted ongoing questions about how to best leverage AI data sets for secondary uses.

"One of the interesting challenges is often that data is collected for maybe one reason, and it may be used for analyzing and finding results for that one particular reason. But there may be other uses for that data as well. So when you get to secondary uses, you have to examine a number of challenges," he said at AFCEA's Automation Transformation conference.

Some of the most pressing concerns the VA's AI program has encountered include questions of how to best apply curated data sets to newfound use cases, as well as how to properly navigate consent of use for proprietary medical data.

Considering the specificity of use cases, particularly for advanced medical diagnostics and predictive analytics, Alterovitz has proposed releasing broader ecosystems of data sets that can be chosen and applied depending on the demands of specific AI projects.

"There's a lot to think about with data sets and how they work together. Rather than release one data set, consider releasing an ecosystem of data sets that are related," he said. "Imagine, for example, someone is searching for a trial you have information about. Consider the patient looking for the trial, the physician, the demographics, pieces of information about the trial itself, where it's located. Having all that put together makes for an efficient use case and allows us to better work together."

Alterovitz also discussed the value of combining structured and unstructured data sets in AI projects, a methodology that Veterans Affairs has found to provide stronger results than using structured data alone.

"When you look at unstructured data, there have been a number of studies in health care looking at medical records where, if you look at only structured data or only unstructured data individually, you don't get as much predictive capability, whether it be for diagnostics or prognostics, as by combining them," he said.

Beyond refining and expanding these data applications methodologies, the VA also appears attentive to how to best leverage proprietary medical data while protecting personally identifying information.

The solution appears to lie in creating synthetic data sets that mimic the statistical parameters and overall metrics of a given data set while obscuring the particularities of the original data set it was sourced from.

"How do you make data available considering privacy and other concerns?" Alterovitz said. "One area is synthetic data, essentially looking at the statistics of the underlying data and creating a new data set that has the same statistics, but can't be identified because it generates at the individual level a completely different data set that has similar statistics."

Similarly, creating select variation within a given data set can remove the possibility of identifying the patient source: "You can take the data, and then vary that information so that it's not the exact same information you received, but is maybe 20% different. This makes it so you can show it's statistically not possible to identify that given patient with confidence."
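The synthetic-data idea Alterovitz describes can be sketched in a few lines: fit only aggregate statistics of the real records, then sample entirely new records from those statistics. This is an illustrative toy, not VA code; the single "lab measurement" field and its distribution are assumptions, and real synthetic-data pipelines model many correlated fields.

```python
import random
import statistics

random.seed(42)

# "Real" patient values (a hypothetical lab measurement, mean ~100, SD ~15).
real = [random.gauss(100, 15) for _ in range(5000)]

# Fit only aggregate statistics; no individual record is retained.
mu = statistics.mean(real)
sigma = statistics.stdev(real)

# Generate a synthetic cohort with the same statistics but new "individuals".
synthetic = [random.gauss(mu, sigma) for _ in range(5000)]

print(f"real:      mean={statistics.mean(real):.1f}, sd={statistics.stdev(real):.1f}")
print(f"synthetic: mean={statistics.mean(synthetic):.1f}, sd={statistics.stdev(synthetic):.1f}")

# The synthetic set matches the real set statistically but shares no records.
assert not set(real) & set(synthetic)
```

Because every synthetic value is freshly sampled, analyses of means and spreads carry over, yet no row traces back to an actual patient.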

Going forward, the VA appears intent on solving these quandaries so as to best inform expanded AI research.

"A lot of the data we have wasn't originally designed for AI. How you make it designed and ready for use in AI is a challenge, and one that has a number of different potential avenues," Alterovitz concluded.


Hampstead Theatre to show three more plays online for free – Camden New Journal newspapers website

Hampstead Theatre shut a fortnight ago because of coronavirus

THREE productions from Hampstead Theatre are to be screened online for free.

Mike Bartlett's Wild, a 2016 play inspired by the American whistleblower Edward Snowden, can be watched from tonight (Monday) until April 5 through the theatre's website.

Beth Steel's Wonderland, a witty drama set in the 1984-85 miners' strike, will be available from 10am on Monday, April 6 until 10pm on April 12.

Howard Brenton's Drawing the Line (2013), about the chaotic partitioning of India in 1947, will be shown the following week.

Artistic Director of Hampstead Theatre, Roxana Silbert, said: "I hope these productions offer audiences entertainment, connection and nourishment in a time of uncertainty and isolation. These three plays all shine a light on turbulent points in our international history which, along with acknowledging the worst of human behaviour, celebrate the ingenuity, humour, compassion and resilience of the best."

All three productions were originally live streamed from Hampstead Theatre and were available to watch on the Guardian's website for 72 hours. The plays have been made available with the permission of the King's Cross media giant.

Hampstead Theatre, which shut on March 16 due to health advice, has already screened one of its earlier plays through Instagram.

Visit www.hampsteadtheatre.com and the Guardian website for more details.


Performance artist Brian Feldman returns to Orlando for a ‘social distancing’ version of three shows – Orlando Weekly

As coronavirus has canceled live entertainment worldwide, we've seen countless performers attempting to convert theatrical experiences into digital streams, with varying degrees of success. But if anyone might be able to capitalize creatively on this crazy cultural moment, it could be Orlando Weekly's favorite performance artist, Brian Feldman. After all, this was the guy who sealed himself inside a Skill Crane arcade machine and performed musicals over the telephone long before social distancing was a thing.

Feldman has been quiet for the first quarter of 2020, but he's returning to the virtual stage this April Fools' Day with an online triple feature. At noon, you can watch as Brian Feldman Writes His Last Will & Testament live on Facebook (facebook.com/brianfeldmanprojects), followed at 6 p.m. with a one-shot Social Distancing Dinner edition of The Feldman Dynamic, featuring his parents and sibling sharing a meal over Jitsi Meet. The evening concludes at 7:30 p.m. with the first-ever online-only presentation of #txtshow, Feldman's signature interactive performance piece. You can register free for all three "pay what you can" events at brianfeldman.com. Since we couldn't meet at a vegan restaurant, a Disney theme park or any of our other usual hangouts, Feldman emailed me these thoroughly virus-scanned replies to my questions about being a performance artist in the midst of a pandemic.

Where are you passing your "stay at home" quarantine?

I've been sleeping on the couch and hanging in the living room of Studio 6107 (the family apartment) in Sanford, where [at the time of this interview] there actually is no "stay at home" quarantine.

How have you been spending your time while stuck inside?

You know, save for the lack of daily bike rides, it hasn't been all that much different from when everyone's not at home in quarantine. I've been at the computer somewhat obsessively reading the news, scrolling through Twitter on my phone, texting and WhatsApping friends to check in and see how they're doing, listening to songs to wash your hands to, watching people adapt shows for Facebook and Zoom, falling into YouTube spirals, eating my usual one meal a day yet somehow washing more dishes than anybody else (I am the Dishwasher, after all), forgetting to take a shower some days, arguing with my Dad much more than I should (I'm sorry, Dad), and just trying my best to stay optimistic about the future. There's also a 50-inch flatscreen TV here, which I've turned on a total of one time.

What are some of the notable possessions you'll be including in your Last Will & Testament?

While it's no David Geffen yacht, it is like #48hYardSale, only with all the stuff I just could never part with. There are museum-worthy paintings, items from my childhood, boxes upon boxes of photo prints, negatives and slides; Warhol-esque time capsules and other pop culture artifacts I've probably hung onto for too long. All of my performance archives: project posters, signage (including the portable marquee that I retired after The Most Expensive Gas in America, which still has the gas prices on it), programs, tickets, props (the Orlando Weekly box I was inside of), wardrobe (most notably The Singing Menorah costume and Hannah's wedding dress from Marries Anybody: Part II), handwritten notes, hundreds of buttons and other ephemera. And, of course, The Skill Crane Kid machine. Now that belongs in a museum.

How is your family doing, and do you think social distancing will improve The Feldman Dynamic?

Early articles and reviews written about The Feldman Dynamic really played up the whole "dysfunctional family" angle. But the truth is, there is literally no way you can be a truly dysfunctional family and pull off a live theatrical presentation like this. While there've certainly been moments (many moments, actually) when none of us have wanted to continue doing this "show" (in quotes, since it's a relative term), it's continued to go on. Now, the show must go online. That stated, we had my Mom on FaceTime for the sixth night of 8 Wards of Chanukah up in D.C., and people told me they hated that.

#txtshow seems tailor-made for our current moment; how do you think performing remotely will impact the experience?

I'd have to agree that the show is more relevant now than ever. But doing the show online is something I've been reluctant to do since immediately following that very first performance at the Kerouac House, when people were already encouraging me to livestream it.

My resistance has always stemmed from feeling that it's vital for the audience to be in the room where it happens, so that everyone can see, hear and react to each other. When something shocking or surprising is said, it's always beneficial to know that someone in that space with you right then and there wrote it, and made the character say it. Making it anonymous via two screens (Twitter and, in this case, Jitsi Meet) ultimately may or may not work. But I guess we'll find that out together!

Any advice for other artists interested in using Jitsi Meet for performances?

Yes. Don't do it! Stick with Zoom and leave Jitsi Meet for me and Edward Snowden.

So, in researching possible video conferencing platforms to utilize for projects during this period of #TheaterAtHome, the main thing I focused on was selecting one that'd be extremely easy to use, extremely free without a time limit, and which offered an assurance that I could hear all audio in a single source from every single person in the room at the same time. You know, like traditional theater.

Zoom, which everyone's using and I almost went with, doesn't always allow everyone to be heard clearly at once, and when on Speaker View it jumps back and forth, which I didn't think would work for The Feldman Dynamic (especially when everyone's talking simultaneously) or #txtshow (when it's really helpful to be able to hear the silence, and not just have everyone on mute). Ultimately, we'll find out if going with one of my best friend's top suggestions (Jitsi Meet) was the best choice when we do it live!

Do you have any upcoming projects or plans?

I was originally scheduled to travel to Goa from May through June to shoot another micro-budget feature with the same team with whom I shot a film in Chennai, India, called Goodbye, White Guy, which has yet to be released. Ideally, if the world ever returns to normal, a notable festival will accept it and audiences will finally get to see what I look like after not shaving, eating that much, changing my clothes or taking a shower for days on end. Oh wait, that sounds like the plot of last week.

Depending on how long this thing lasts (answer: September 2021, at least), I might finally stage my long joked-about project, Brian Feldman Reads the Phone Book. Assuming I can find one. Honestly, I have no idea what I'm gonna do next. And Baruch Hashem for that!

As a theme park fan, what are you most looking forward to when the attractions reopen?

Is it too on-the-nose for me to say I'd like to visit Carousel of Progress, sing "the song" and hope that nothing breaks in the process? If it is, since I've unfortunately had to go gluten-free since my last visit to the parks, and since it doesn't look like I'm going to be spending (or making) all that much money for the foreseeable future, perhaps with my stimulus payment check I'll be able to afford one of everything from Erin McKenna's Bakery NYC at Disney Springs? If not that, then Dole Whips for everybody!

More importantly, I'm most looking forward to seeing everyone on my Facebook feed breathe a massive sigh of relief.

How many times did you wash your hands today?

More times than Lady Macbeth.

skubersky@orlandoweekly.com

Go here to see the original:
Performance artist Brian Feldman returns to Orlando for a 'social distancing' version of three shows - Orlando Weekly

The Academy Software Foundation and the Advantages of Open Source Software – VFX Voice

"Open source software is used in the creation of every film, and every frame of film is stored digitally in open source software formats; this software is critical infrastructure for both making and preserving movies," says Rob Bredow, Sr. Vice President, Executive Creative Director and Head of ILM, and Chair of the ASWF governing board.

The initial investigation included an industry-wide survey, a series of one-on-one interviews with key stakeholders, and three Academy Open Source Summits held at the Academy headquarters, according to Andy Maltz, Managing Director, Science and Technology Council, AMPAS, and ASWF Board Member.

Comments Bredow: "They identified the key common challenges they were seeing with open source software. The first was making it easier for engineers to contribute to OSS with a modern software build environment hosted for free in the cloud. The second was supporting users of open source software by helping to reduce the existing version conflicts between various open source software packages. And the third was providing a common legal framework to support open source software."

"The mission of the Academy Software Foundation," Bredow elaborates, "is to increase the quality and quantity of contributions to the content creation industry's open source software base; to provide a neutral forum to coordinate cross-project efforts; to provide a common build and test infrastructure; and to provide individuals and organizations a clear path to participation in advancing our open source ecosystem."

The ASWF has achieved solid early acceptance, with AWS (Amazon Web Services), Animal Logic, Autodesk, Blue Sky Studios, Cisco, DNEG, DreamWorks, Unreal Engine, Google Cloud, Intel, Microsoft, Movie Labs, Netflix, NVIDIA, Sony Pictures, Walt Disney Studios, Weta Digital, Foundry, Red Hat, Rodeo Visual Effects Company and Warner Bros. already on board.

Read the original here:
The Academy Software Foundation and the Advantages of Open Source Software - VFX Voice

Open Source Code – The Future of User Privacy – Privacy News Online

Will we see more and more open source software in the future, or is this a passing trend that will die off eventually?

According to survey data, open source is definitely here to stay. Right now, around 78% of companies actually run open source software, and that trend will likely continue to grow. Open source code benefits businesses a lot, after all, since they get to enjoy better security, scalability, and much easier deployment, as ProPrivacy discusses in their guide: Why is open source important?

But what does that mean for you, the end user? Will you enjoy better privacy? Short answer: yes. But if you're looking for more detail, keep reading.

Here's why open source code is the only way to enjoy true privacy, and why you should use an open source VPN client if you want to secure your online data.

Open source code is something that's open to the public. Basically, anyone can inspect, copy, learn from, and sometimes even edit it without fear of legal repercussions. To truly be open source, the software must also have an open source license that meets all the standards of the Open Source Definition.

Nowadays, most developers publish their open source code on GitHub.

Comparatively, closed source code only belongs to the company, team, or person who created it. Nobody else can use or inspect it, unless they want to meet the long arm of the law.

Yes. There are no ifs or buts here.

If you are extremely focused on privacy, open source is the only way to go, especially when using a VPN.

We're not saying a closed source VPN client can't be trusted at all. But if you're the kind of person who needs to have full control over their Internet privacy, open source options are simply better for your sanity.

Well, OpenVPN, SoftEther, and WireGuard for starters. OpenVPN is the most popular, but SoftEther and WireGuard are much more lightweight (meaning you get good security and smooth speeds).

But using any of those options isn't as simple as just installing a client on your device. You need a bit of technical know-how to set everything up. WireGuard might go more smoothly, since it's more user-friendly. But you'll still have to buy and set up your own server, which can cost you anywhere between $15 and $100 per month.
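
To give a sense of the setup involved, here is a rough sketch of a minimal WireGuard client config. The keys, addresses, and endpoint below are placeholders for illustration, not working values:

```ini
# /etc/wireguard/wg0.conf -- minimal client config (placeholder values)
[Interface]
# The client's own private key and its address inside the tunnel
PrivateKey = <client-private-key>
Address = 10.0.0.2/32

[Peer]
# The server you rented and set up yourself
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route all traffic through the tunnel
AllowedIPs = 0.0.0.0/0, ::/0
```

With real keys in place (generated with `wg genkey`), the tunnel is brought up with `wg-quick up wg0`.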

Besides those options, you might see some articles recommending a few other open source solutions. But they're not too popular or user-friendly, and most of them only run on Linux.

Luckily, at PIA we have also started embracing open source: we announced a shift towards open source back in 2018, and recently opened our Android code for inspection, meaning all PIA VPN clients are now open source VPN clients.

What's more, we have even started reaching out to external auditors, and recently launched a closed beta for the WireGuard protocol.

So at PIA we're definitely committed to full transparency and user privacy. If you'd like to learn more about the pros of using PIA, check out this in-depth review (don't worry, you can easily scan through it).

The future is open source. The stats prove it, and it's really the only way to go when it comes to guaranteeing user privacy and helping people trust brands (especially VPNs).

Why else do you think people should use an open source VPN client? Or do you believe closed source options are better for privacy? Share your thoughts with us in the comments below.

WireGuard is a registered trademark of Jason A. Donenfeld.

See the article here:
Open Source Code - The Future of User Privacy - Privacy News Online

Eclipse Theia 1.0 is an open source alternative to VS Code – JAXenter

Modular, extensible, open source.

The Eclipse Foundation, one of the leading global voices advancing open source software, released Eclipse Theia version 1.0. Intended to be a completely open source alternative to Microsoft's Visual Studio Code, Eclipse Theia supports multiple languages and combines some of the best features of IDEs into one extensible platform.

If the name rings any bells, that's because the Theia project began elsewhere. It was initially created by Ericsson and TypeFox (founders of Gitpod and Xtext) in 2016 and moved to The Eclipse Foundation in May of 2018.

To celebrate this milestone, explore some of its stand-out features and see what sets it apart from VS Code.


Eclipse Theia screenshot.

Theia is designed to run both in the cloud and on the desktop, so if you are unsure which you will need, you can use it in both contexts. You can even develop one IDE and run it in browser and/or desktop versions.

From The Eclipse Foundation's press release:

To support both situations with a single source, Theia runs in two separate processes. Those processes are called frontend and backend respectively, and they communicate through JSON-RPC messages over WebSockets or REST APIs over HTTP. In the case of Electron, the backend, as well as the frontend, run locally, while in a remote context the backend would run on a remote host.
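
As a rough sketch of what that frontend/backend split looks like on the wire, the snippet below frames and dispatches a JSON-RPC 2.0 message in TypeScript. The method name and handler here are made up for illustration; this is not Theia's actual API, just the message shape the press release describes:

```typescript
// Minimal JSON-RPC 2.0 framing sketch (illustrative names, not Theia's API).

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: unknown[];
}

interface JsonRpcResponse {
  jsonrpc: "2.0";
  id: number;
  result?: unknown;
  error?: { code: number; message: string };
}

let nextId = 0;

// Frontend side: serialize a request for a backend service method.
// In Theia this string would travel over a WebSocket.
function makeRequest(method: string, params: unknown[]): string {
  const req: JsonRpcRequest = { jsonrpc: "2.0", id: nextId++, method, params };
  return JSON.stringify(req);
}

// Backend side: dispatch a request to a handler and build the response.
function handleRequest(
  raw: string,
  handlers: Record<string, (...args: any[]) => unknown>
): string {
  const req: JsonRpcRequest = JSON.parse(raw);
  const handler = handlers[req.method];
  const res: JsonRpcResponse = handler
    ? { jsonrpc: "2.0", id: req.id, result: handler(...(req.params ?? [])) }
    : { jsonrpc: "2.0", id: req.id, error: { code: -32601, message: "Method not found" } };
  return JSON.stringify(res);
}

// Example round trip: the frontend asks the backend about a (hypothetical) file.
const wire = makeRequest("fileSystem/getSize", ["/workspace/README.md"]);
const reply = handleRequest(wire, {
  "fileSystem/getSize": (path: string) => path.length, // stub handler
});
console.log(reply);
```

The same request/response shape works whether the transport is a WebSocket to a remote host or local inter-process communication in the Electron case.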


Will Theia eclipse (pardon the pun) VS Code? The fact that it is open source scores some big brownie points. Of course, if you want to contribute to this project, you can help it improve by submitting bug reports, submitting pull requests, and solving issues.

Eclipse Theia takes many design principles from VS Code, but also stands out as its own project. From the Theia website, the main differences between it and VS Code are its modularity, its desktop and cloud capabilities, and its vendor-neutrality.

Early adopters and contributors of this project include Google Cloud, RedHat, IBM, arm, and Arduino.

Luca Cipriani, CTO of Arduino, says:

"As one of the world's largest open source ecosystems for hardware and software, we fully support extending vendor-neutral governance to every aspect of software development. Eclipse Theia is another important step in that direction. Our community has been eagerly advocating for this functionality for some time."

View the source code on GitHub. Or, give it a taste and try it out in Gitpod.

Read this article:
Eclipse Theia 1.0 is an open source alternative to VS Code - JAXenter