Daily Archives: June 13, 2021

Integrative Analysis of Genome, 3D Genome, and Transcriptome Alterations of Clinical Lung Cancer Samples – DocWire News

Posted: June 13, 2021 at 12:46 pm

Genomics Proteomics Bioinformatics. 2021 Jun 8:S1672-0229(21)00096-6. doi: 10.1016/j.gpb.2020.05.007. Online ahead of print.

ABSTRACT

Genomic studies of cancer cell alterations, such as mutations, copy number variations (CNVs), and translocations, greatly promote our understanding of the genesis and development of cancer. However, the 3D genome architecture of cancers remains less studied due to the complexity of cancer genomes and technical difficulties. To explore the 3D genome structure in clinical lung cancer, we performed Hi-C experiments using paired normal and tumor cells harvested from patients with lung cancer, combined with RNA-seq analysis. We demonstrated the feasibility of studying the 3D genome of clinical lung cancer samples with a small number of cells (1 × 10⁴), compared the genome architecture between clinical samples and cell lines of lung cancer, and identified conserved and changed spatial chromatin structures between normal and cancer samples. We also showed that Hi-C data can be used to infer CNVs and point mutations in cancer. By integrating these different types of cancer alterations, we showed significant associations between CNVs, the 3D genome, and gene expression. We propose that the 3D genome mediates the effects of cancer genomic alterations on gene expression through altering regulatory chromatin structures. Our study highlights the importance of analyzing 3D genomes of clinical cancer samples in addition to cancer cell lines and provides an integrative genomic analysis pipeline for future larger-scale studies in lung cancer and other cancers.
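
As an illustration of the CNV-inference idea mentioned in the abstract (a sketch of ours, not the authors' published pipeline), copy number changes can be estimated from binned Hi-C read coverage by normalizing a tumor sample against its matched normal and flagging bins whose log2 ratio departs from zero:

```python
import numpy as np

def call_cnv_from_hic_coverage(tumor_cov, normal_cov, log2_gain=0.3, log2_loss=-0.3):
    """Toy CNV caller comparing binned Hi-C read coverage of tumor vs. matched normal.

    tumor_cov, normal_cov: 1D arrays of read counts per genomic bin (e.g., 1-Mb bins).
    Returns the per-bin log2 ratio and a call of 'gain', 'loss', or 'neutral'.
    Illustrative only; not the pipeline described in the paper.
    """
    t = tumor_cov / tumor_cov.sum()           # normalize out library size
    n = normal_cov / normal_cov.sum()
    ratio = np.log2((t + 1e-9) / (n + 1e-9))  # log2 tumor/normal per bin
    calls = ["gain" if r > log2_gain else "loss" if r < log2_loss else "neutral"
             for r in ratio]
    return ratio, calls

# Example: 10 bins, with bin 3 amplified and bin 7 deleted in the tumor.
normal = np.array([100, 110, 95, 100, 105, 98, 102, 100, 99, 101], dtype=float)
tumor  = np.array([100, 108, 97, 210, 104, 99, 100,  45, 98, 103], dtype=float)
ratio, calls = call_cnv_from_hic_coverage(tumor, normal)
print(list(zip(np.round(ratio, 2), calls)))
```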

PMID:34116262 | DOI:10.1016/j.gpb.2020.05.007

Oxford spinout spies the hidden mechanics of DNA and disease with single-pair resolution method – FierceBiotech

Posted: at 12:46 pm

A spinout from the University of Oxford has found a new way to depict and analyze DNA with super-fine resolution, allowing researchers to peer into what they describe as the "dark matter" of the human genome and the molecular basis of many diseases.

Nucleome Therapeutics is working on a method known as micro-capture-C, or MCC, to provide a three-dimensional view of the famously twisting double-helix structure, with the ability to zoom in on individual base pairs.

Previous methods of determining the large-scale 3D genome structure within cells have been unable to resolve it much below 500 to 1,000 base pairs, said co-founder James Davies, who helped develop the technology at Oxford's MRC Weatherall Institute of Molecular Medicine alongside Danuta Jeziorska, who serves as Nucleome's CEO.

Nucleome plans to use its technique to identify the genes at play behind severe COVID, as well as find new drug targets for diseases such as rheumatoid arthritis and multiple sclerosis, with additional reports in the near future. Its latest work on 3D genome mapping was published this week in Nature.

The researchers equate the process with looking at a city's skyline, representing the full strand of DNA within a cell. While before they could only make out the shape of small buildings from a distance, now they can see how it's built up from individual bricks, with each of its 6 billion bricks representing a single letter of the genetic code.

"3D genome analysis is key to understanding the largely untapped dark matter of the genome," Jeziorska said. "Better resolution of 3D genome maps improves the accuracy and confidence of linking disease-relevant genetic changes to genes."

Such applications could include the coronavirus pandemic, where the method may help provide a better understanding of why some people require intensive care while others show no symptoms at all.

"For example, at the moment we know that there is a genetic variant which doubles the risk of being severely affected by COVID-19," Davies said. "However, we do not know how the genetic variant makes people more vulnerable to COVID-19."

By providing a more detailed view into DNA's larger structure, drugs aimed at these genetic targets may have a better chance of making it through clinical trials, he added.

In the Nature publication, the researchers report that MCC could spot the physical interactions between gene-regulating proteins and the DNA code itself at base-pair resolution, even though one targeted string may be controlled by genes located tens of thousands to millions of base pairs further along the chain; in the skyline analogy, that is maybe a mile away, by bricks in a wall on the other side of the city.

In Brief This Week: Quantum-Si, TGen, Yale, and More – GenomeWeb

Posted: at 12:46 pm

NEW YORK – Quantum-Si said this week that it has completed its planned merger with special purpose acquisition firm HighCape Capital Acquisition.

The business combination and private placement, approved by HighCape's stockholders on June 9, provide the company with approximately $534 million in funding, prior to transaction fees, for further development and commercialization of its single-molecule, semiconductor chip-based protein sequencing and genomics technology. This includes approximately $109 million in cash held in HighCape's trust account and $425 million from private placement investors, including Foresite Capital Management, Eldridge, accounts advised by ARK Invest, and Glenview Capital Management. Also, QSi's management team and existing stockholders have rolled all of their equity into the combined company.

Following the merger, the combined firm was renamed Quantum-Si. Its Class A common stock and warrants will begin trading on the Nasdaq Global Market on June 11 under the symbols QSI and QSIAW, respectively. Former Quantum-Si stockholders exchanged their shares of capital stock for common stock of the combined company at an exchange ratio of 0.7975. Each share of HighCape Class A common stock and Class B common stock became one share of the combined company's Class A common stock.

The Translational Genomics Research Institute said this week that it has received a "substantial grant" from Taiwan Semiconductor Manufacturing to support SARS-CoV-2 variant tracking. Arizona-based TGen will use the funding to perform genomic sequencing of virus samples. The institute is under contract with the Arizona Department of Health Services and the Centers for Disease Control and Prevention to sequence samples from patients who have tested positive for SARS-CoV-2 in Arizona and to monitor for the emergence of mutations and variants. TSMC is building a chip manufacturing plant in north Phoenix that is expected to begin production in 2024.

The US Food and Drug Administration this week reissued a letter granting Emergency Use Authorization to the Yale School of Public Health for its SalivaDirect SARS-CoV-2 test.

In its letter, the FDA authorized use of the test with additional thermocyclers: the ABI StepOne Real-Time PCR System, ABI Prism 7000 Real-Time PCR System, ABI QuantStudio Dx, Ubiquitome Liberty16, Roche Cobas Z480, and Analytik Jena qTower. Based on a post-authorization asymptomatic screening study, the agency also removed a limitation on serial testing for asymptomatic screening.

The FDA had granted EUA to Yale for the SalivaDirect test in August 2020.

OpGen announced this week that it has submitted an updated 510(k) summary document to the US Food and Drug Administration for its Acuitas AMR Gene Panel for Isolates. The updated document includes the agency's requested updates to documents such as the package insert, electronic user guide, and operator manual. The agency provided feedback on the documents by the end of May and told OpGen it intends to finish its review by August, but that it can't commit to a timeline. OpGen previously submitted its Acuitas panel to the FDA in May 2019, and the agency has twice requested more information.

ProPhase Labs this week announced the formation of two wholly owned subsidiaries, ProPhase Precision Medicine and ProPhase Global Healthcare. The precision medicine subsidiary will focus on genomic testing technologies and will look to acquire existing businesses and technologies or otherwise gain access to technologies used in whole-genome sequencing, the company said. The global healthcare business will expand the company's COVID-19 testing into other countries and will develop additional healthcare-related initiatives, ProPhase added. The company is also developing SARS-CoV-2 antigen and antibody tests to add to its offerings.

Yourgene said this week that it has entered into a license and supply agreement with an unnamed US precision medicine company for an initial term of three years, starting April 1, 2022. The agreement grants the precision medicine company a nonexclusive license to Yourgene's Flex Analysis Software and commits Yourgene to supplying sample preparation reagents and instrumentation to support the precision medicine company's planned launch of a new clinical reproductive health screening service across the US. The deal also allows for automatic annual renewals after the initial term. Financial terms weren't disclosed.

India-based Core Diagnostics said this week that it has received accreditation from the College of American Pathologists for a range of tests, including cardiology, oncology, endocrinology, infectious diseases, gynecology, and nephrology. In the accreditation process, CAP inspectors examined the firm's clinical laboratory records and quality control procedures for the past two years. The inspectors also examined Core Diagnostics' laboratory staff qualifications, equipment, facilities, safety program and record, and overall management.

In Brief This Week is a selection of news items that may be of interest to our readers but had not previously appeared on GenomeWeb.

Genome Engineering Market 2021 Highlights, Recent Trends, Market Growth and Business Opportunities till 2028 Sangamo Biosciences, Inc., Integrated…

Posted: at 12:46 pm

The Genome Engineering Market research report provides detailed observation of several aspects, including the rate of growth, regional scope, and recent developments by the primary market players. The report offers Porter's Five Forces, PESTEL, and market analysis to provide a 360-degree research study on the global Genome Engineering market. The research study discusses important market strategies, future plans, market share growth, and product portfolios of leading companies. The final report copy provides an analysis of the impact of the COVID-19 pandemic on the Genome Engineering market, as well as fluctuations during the forecast period.

Top companies in the global Genome Engineering market are Transposagen Biopharmaceuticals, Inc. (U.S.), Genscript Biotech Corporation (U.S.), New England Biolabs, Inc. (U.S.), Sangamo Biosciences, Inc. (U.S.), Integrated DNA Technologies, Inc. (U.S.), Merck KGaA (Germany), Horizon Discovery Group Plc (U.K.), Thermo Fisher Scientific, Inc. (U.S.), Origene Technologies, Inc. (U.S.), Lonza Group Ltd. (Switzerland), and others.

Click here to get the free sample copy of Genome Engineering market: https://www.marketinsightsreports.com/reports/06022951892/2016-2028-global-genome-engineering-industry-market-research-report-segment-by-player-type-application-marketing-channel-and-region/inquiry?Source=MW&Mode=72

By type, the market is divided into: CRISPR, TALEN, ZFN, Antisense, and Other Technologies.

By application, the market is divided into: Cell Line Engineering, Animal Genetic Engineering, Plant Genetic Engineering, and Other Applications.

Regional Analysis: Asia-Pacific (China, India, Japan, South Korea, Australia, Indonesia, Malaysia, and others), North America (United States, Canada, and Mexico), Central & South America (Brazil and rest of South America), Europe (Germany, France, UK, Italy, Russia, and rest of Europe), Middle East & Africa (GCC countries, Turkey, Egypt, South Africa, and others).

(Exclusive Offer: Flat 25% discount on this report.) Browse the full Genome Engineering market report description with TOC: https://www.marketinsightsreports.com/reports/06022951892/2016-2028-global-genome-engineering-industry-market-research-report-segment-by-player-type-application-marketing-channel-and-region?Source=MW&Mode=72

The Genome Engineering market report highlights include: a comprehensive evaluation of all opportunities and risks in the market; current developments and significant events in the Genome Engineering market; a detailed study of the business strategies of the market-leading players; a conclusive study of the market's growth prospects for the coming years; and an in-depth view of market-specific drivers, constraints, and major micro markets.

Important features on offer and key highlights of the report: potential and niche segments/regions exhibiting promising growth; a detailed market overview; changing market dynamics of the industry; in-depth market segmentation by type, application, etc.; historical, current, and projected market size in terms of volume and value; recent industry trends and developments; the competitive landscape; and strategies and product offerings of key players.

Free customization of the report: This report can be further customized according to the client's specific requirements. No additional charges will be added for limited additional research.

Contact us: Irfan Tamboli (Sales Manager), Market Insights Reports. Phone: +1 704 266 3234 | +91-750-707-8687. Email: sales@marketinsightsreports.com | irfan@marketinsightsreports.com

What Does Big Data Have to Do With Wildlife Conservation? – Gadgets 360

Posted: at 12:46 pm

A species goes extinct when there are none of its kind left. In other words, extinction is about small numbers, so how does big data help us study extinction? Luckily for us, each individual of a species carries in its genome signatures of its past, information on how connected or isolated it is today, and other clues to what may predict its future. The last fifteen years have witnessed a major change in how we can read genomes, and information from the genomes of individuals and species can help us better plan their conservation.

All life on Earth harbours genetic material. Often called the blueprint of life, this genetic material could be DNA or RNA. We all know what DNA is, but another way to think of DNA is as data. All mammals, for example, harbour between 2 and 3.5 billion bits of data in every one of their cells. The entire string of DNA data is called the whole genome. Recent changes in technology allow us to read whole genomes. We read short, 151-letter-long pieces of information many, many times, and piece together the whole genome by comparing them to a known reference. This helps us figure out where each of these 151-letter-long pieces goes in the 3-billion-letter-long word. Once we have read each position an average of 10 or 20 times, we can be confident about it. If each genome is sequenced even ten times and only ten individuals are sampled, for mammals each dataset would consist of 200 to 350 billion bits of data!
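
That back-of-the-envelope estimate can be reproduced directly. A minimal sketch (our own illustration; the function and parameter names are ours, not from the article) multiplies genome size by sequencing depth and the number of individuals sampled:

```python
def dataset_size(genome_size_bases, depth, n_individuals):
    """Total letters of raw sequence data for a resequencing study (toy estimate)."""
    return genome_size_bases * depth * n_individuals

# Mammalian genomes span roughly 2 to 3.5 billion letters.
for genome in (2e9, 3.5e9):
    total = dataset_size(genome, depth=10, n_individuals=10)
    print(f"{genome / 1e9:.1f}-billion-letter genome, 10x depth, 10 individuals "
          f"-> {total / 1e9:.0f} billion letters")
# Prints 200 and 350 billion, matching the range quoted in the article.
```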

Over time, the genome changes because of mutation, or spelling errors that creep in. Such spelling errors create variation, or differences between individual genomes in a population (a set of animals or plants). Large populations with many individuals will hold a wide variety of spellings, or high genetic variation. Since DNA is the genetic blueprint, changes in the environment can also get reflected in these DNA spellings, with individuals carrying certain words in their genome surviving better than others under certain conditions. Changes in population size often change the variety of letters observed at a specific location in the genome, or variation at a specific genomic position. Migration, or movement of animals into a population, adds new letters and variation. Taking all of these together, the history of a population can be understood by comparing the DNA sequences of individuals. The challenge lies in the fact that every population faces all of these effects at once: changes in population size, environmental selection, migration, and mutation, and it is difficult to separate the effects of the different factors. Here, big data comes to the rescue.
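
One concrete way of comparing the DNA sequences of individuals is nucleotide diversity: the average number of spelling differences per position between pairs of genomes. The short sketch below is our own illustration (not from the article) using toy sequences, where the more varied population scores higher:

```python
from itertools import combinations

def nucleotide_diversity(sequences):
    """Average pairwise differences per site (pi), a standard measure of genetic variation."""
    length = len(sequences[0])
    pairs = list(combinations(sequences, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

varied_pop  = ["ATCGTAC", "ATCGTAT", "ATGGTAC", "TTCGTAC"]  # more spelling variety
uniform_pop = ["ATCGTAC", "ATCGTAC", "ATCGTAC", "ATCGTAT"]  # nearly identical genomes
print(nucleotide_diversity(varied_pop), nucleotide_diversity(uniform_pop))
```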

Photo Credit: Dr Anubhab Khan

Genomic data has allowed us to understand how a population has been affected by changes in climate, and whether it has the necessary genomic variation to survive in the face of ongoing climate change. Or how specific human activities have impacted a population in the past. We can understand more about the origins of a population. How susceptible is a population to certain infections? Or whether the individuals in a population are related to each other. Some of these large datasets have helped identify if certain populations are identical and should be managed together or separately. All of these questions help in the management and conservation of a population.

We have worked on such big genomic datasets for tigers, and our research has helped us identify which populations of tigers have high genomic variation and are more connected to other populations. We have identified populations that are small and have low genomic variation, but also seem to have mis-spelled or badly spelled words, or a propensity for 'bad' mutations. We have identified unknown relationships between individuals within populations and have suggested strategies that could allow these isolated populations to recover their genomic variation. It has been amazing to peek into animals' lives through these big data approaches, and we hope these types of genomic datasets will contribute to understanding how biodiversity can continue to survive on this Earth.

Uma Ramakrishnan is fascinated by unravelling the mysteries of nature using DNA as a tool. Along with her lab colleagues, she has spent the last fifteen years studying endangered species in India. She hopes such understanding will contribute to their conservation. Uma is a professor at the National Centre for Biological Sciences.

Dr. Anubhab Khan is a wildlife genomics expert. He has been researching the genetics of small, isolated populations for the past several years and has created and analyzed large-scale genome sequencing data of tigers, elephants, and small cats, among others. He is keen on population genetics, wildlife conservation, and genome sequencing technologies. He is passionate about ending technology disparity in the world, either by making advanced technologies and expertise available or by developing techniques that are affordable and accessible to all.

This series is an initiative by the Nature Conservation Foundation (NCF), under their programme 'Nature Communications' to encourage nature content in all Indian languages. To know more about birds and nature, Join The Flock.

Cells with synthetic genomes reprogrammed at MRC LMB – and could create new drugs or biodegradable plastics – Cambridge Independent

Posted: at 12:45 pm

A potentially revolutionary step forward in biology which could lead to more reliable drug manufacture, new antibiotics or biodegradable plastics has been achieved at the MRC Laboratory of Molecular Biology in Cambridge.

Two years on from creating the biggest ever synthetic genome, the laboratory of Professor Jason Chin has now reprogrammed cells to make artificial polymers from building blocks not found in nature.

They were able to direct the cells by encoding instructions in their genes and they proved that their synthetic genome also made them entirely resistant to infection by viruses.

The research could lead to the creation of entirely new polymers: large molecules made of many repeating units, as seen in proteins, plastics and many drugs.

Prof Chin said: "This system allows us to write a gene that encodes the instructions to make polymers out of monomers that don't occur in nature."

"These bacteria may be turned into renewable and programmable factories that produce a wide range of new molecules with novel properties, which could have benefits for biotechnology and medicine, including making new drugs, such as new antibiotics."

"We'd like to use these bacteria to discover and build long synthetic polymers that fold up into structures and may form new classes of materials and medicines."

"We will also investigate applications of this technology to develop novel polymers, such as biodegradable plastics, which could contribute to a circular bioeconomy."

It follows pioneering work, completed in 2019, which enabled the group to construct the entire genome of the bacterium Escherichia coli (E. coli) from scratch.

As the Cambridge Independent reported at the time, they had created a new lifeform that played by different biological rules to any other before it.

And they did so by answering a long-standing question about the way genetic code is read.

DNA is made up of four bases, which are represented by the letters A, T, C and G.

These are read by machinery in cells in threes, such as TCG, and each of these groups is called a codon.

To build proteins, each codon tells the cell to add a specific amino acid to a chain via molecules called tRNA. And each codon has a specific tRNA that recognises it and adds the corresponding amino acid. The tRNA that recognises the codon TCG, for example, leads to the amino acid serine.

In all known life, there are 64 codons, or possible combinations, yet only 20 natural amino acids. This means there is redundancy in the system. For example, TCG, TCA, AGC and AGT all code for serine.

Other codons such as TAG and TAA send stop signals to tell a cell when to stop making a protein.

When they synthesised the entire genome of the commonly studied bacterium E. coli in 2019, Prof Chin's group also simplified its genome, giving it just 61 codons.

Like a giant find-and-replace exercise, they removed every instance of TCG and TCA and replaced them with the synonyms AGC and AGT, while every instance of the stop codon TAG was replaced by another, TAA.
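
That find-and-replace step can be sketched in a few lines of code (a toy of ours; the real genome synthesis involved far more than a string substitution), swapping each target codon for its synonym and checking that the encoded protein is unchanged:

```python
# Toy version of the codon "find and replace" described above. Real genome
# recoding also has to respect overlapping genes and regulatory sequences;
# this sketch only swaps synonymous codons in a single reading frame.
RECODING = {"TCG": "AGC", "TCA": "AGT", "TAG": "TAA"}

CODON_TABLE_SUBSET = {  # just the codons needed for this example
    "TCG": "Ser", "TCA": "Ser", "AGC": "Ser", "AGT": "Ser",
    "ATG": "Met", "GCT": "Ala", "TAG": "STOP", "TAA": "STOP",
}

def recode(cds):
    """Replace target codons codon-by-codon, leaving the encoded protein unchanged."""
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return "".join(RECODING.get(c, c) for c in codons)

def translate(cds):
    return [CODON_TABLE_SUBSET[cds[i:i + 3]] for i in range(0, len(cds), 3)]

gene = "ATGTCGGCTTCATAG"                       # Met-Ser-Ala-Ser-STOP, using the codons being removed
recoded = recode(gene)                         # ATGAGCGCTAGTTAA
assert translate(gene) == translate(recoded)   # same protein, but only 61 codons needed
print(gene, "->", recoded)
```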

Their creation continued to synthesise all the normal proteins and the cells containing the synthetic genome thrived.

For the new work, they aimed to use their new techniques to make artificial polymers by exploiting cells' natural protein-making processes.

They further modified the bacteria to remove the tRNA molecules that recognise the codons TCG and TCA.

It means that even if there are TCG or TCA codons in the genetic code, the cell no longer has the molecule that can read those codons.

And that is fatal for any virus that tries to infect the cell, as viruses replicate by injecting their genome into a cell and hijacking the cell's machinery.

But when the machinery in the modified bacteria tries to read the virus genome, it fails every time it reaches a TCG, TCA or TAG codon.

The researchers infected their bacteria with viruses to test what happened. While the unmodified normal bacteria were killed, the modified bacteria were resistant to infection and survived.

This could be very useful in improving the reliability and cost of drug manufacture.

Medicines such as protein drugs, like insulin, and polysaccharide and protein subunit vaccines, are manufactured by growing bacteria that contain instructions to produce the drug.

"If a virus gets into the vats of bacteria used to manufacture certain drugs then it can destroy the whole batch," said Prof Chin. "Our modified bacterial cells could overcome this problem by being completely resistant to viruses. Because viruses use the full genetic code, the modified bacteria won't be able to read the viral genes."

Freeing up certain codons also means they are available for use for other purposes, such as coding for synthetic building blocks, called monomers.

The team engineered the bacteria to produce tRNAs coupled with artificial monomers that recognised the newly-available codons TCG and TAG.

Genetic sequences with strings of TCG and TAG codons were inserted into the bacteria's DNA and read by the altered tRNAs.

This assembled chains of synthetic monomers in the order defined by the sequence of codons in the DNA.

They were able to programme the cells to string together monomers in different orders by changing the order of TCG and TAG codons in the genetic sequence. And they were able to create polymers composed of different monomers by changing which monomers were coupled to the tRNAs.
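
In spirit, this programming step is a lookup from the freed-up codons to monomers, with the codon order dictating the monomer order. The sketch below is purely illustrative: the monomer names are placeholders of ours, not the compounds used in the study.

```python
# Toy illustration of programming a polymer sequence with the freed-up codons.
MONOMER_FOR_CODON = {"TCG": "monomer_A", "TAG": "monomer_B"}  # placeholder names

def assemble_polymer(dna):
    """Read the sequence codon by codon and string together the encoded monomers."""
    codons = [dna[i:i + 3] for i in range(0, len(dna), 3)]
    return [MONOMER_FOR_CODON[c] for c in codons if c in MONOMER_FOR_CODON]

# Changing the order of TCG and TAG codons changes the order of monomers.
print(assemble_polymer("TCGTAGTCGTCG"))  # ['monomer_A', 'monomer_B', 'monomer_A', 'monomer_A']
print(assemble_polymer("TAGTAGTCGTAG"))  # ['monomer_B', 'monomer_B', 'monomer_A', 'monomer_B']
```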

Polymers comprising up to eight monomers strung together were created.

The ends of these were joined to make macrocycles, a type of molecule that forms the basis of some drugs, including certain antibiotics and cancer drugs.

The synthetic monomers were linked using the same chemical bonds that join amino acids in proteins, but the team is also exploring how to expand the range of linkages that could be used in the new polymers.

Dr Megan Dowie, head of molecular and cellular medicine at the Medical Research Council, which funded the study, said: "Dr Chin's pioneering work into genetic code expansion is a really exciting example of the value of our long-term commitment to discovery science. Research like this, in synthetic and engineering biology, clearly has huge potential for major impact in biopharma and other industrial settings."

The study, published in Science, was funded by the MRC and the European Research Council.

People in the News: New Appointments at Invitae, PGDx, Oxford Nanopore, PacBio, More – GenomeWeb

Posted: at 12:45 pm

Invitae: Roxi Wen, Katherine Stueland

Roxi Wen has been appointed as CFO of Invitae, effective June 21. She will replace Shelly Guyer, who will focus on the company's environment, social, and governance efforts. Wen joins Invitae from Mozilla, where she has been CFO. Prior to that, she was CFO at Elo Touch Solutions, and before that, VP of finance at FleetPride. Previously, she was CFO at General Electric Critical Power. Wen holds a bachelor of economics from Xiamen University and an MBA from the University of Minnesota.

Katherine Stueland will step down as chief commercial officer of Invitae, effective June 18, to become CEO at another company.

Personal Genome Diagnostics: Brent Dial

Personal Genome Diagnostics has appointed Brent Dial as its CFO. He previously served as principal at Chordata Ventures. Prior to that, he was CFO of Anheuser-Busch's high-end division. He also held positions at JP Morgan Chase, Deutsche Bank Securities, and TCOM. Dial holds an MBA in corporate finance from the University of Pennsylvania's Wharton School and is a graduate of the United States Military Academy.

Oxford Nanopore Technologies: Justin O'Grady

Justin O'Grady has joined Oxford Nanopore Technologies as senior director of translational applications. Previously, he was a senior lecturer in medical microbiology at the University of East Anglia, and before that, a group leader at the Quadram Institute. He holds Ph.D., M.Sc., and B.Sc. degrees in microbiology from the National University of Ireland, Galway.

Pacific Biosciences: Neil Ward

Pacific Biosciences has appointed Neil Ward as VP and general manager for Europe, the Middle East, and Africa.

Ward comes to PacBio from Illumina, where he was senior sales director for Northern Europe. Prior to his 13 years at Illumina, he held bioinformatics and sales roles at Agilent Technologies, Silicon Genetics, Oxford Biomedica, and Celltech. Ward holds a master's degree in bioinformatics from the University of Manchester.

For additional recent items on executive appointments and promotions in omics and molecular diagnostics, please see the People in the News page on our website.

What is Artificial Intelligence (AI)? | IBM

Posted: at 12:44 pm

Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following definition in this 2004 paper (PDF, 106 KB) (link resides outside IBM): "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

However, decades before this definition, the birth of the artificial intelligence conversation was marked by Alan Turing's seminal work, "Computing Machinery and Intelligence" (PDF, 89.8 KB) (link resides outside IBM), which was published in 1950. In this paper, Turing, often referred to as the "father of computer science," asks the following question: "Can machines think?" From there, he offers a test, now famously known as the "Turing Test," in which a human interrogator tries to distinguish between a computer and a human text response. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, as it utilizes ideas around linguistics.

Stuart Russell and Peter Norvig then proceeded to publish Artificial Intelligence: A Modern Approach (link resides outside IBM), which became one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and thinking vs. acting:

Human approach:
Systems that think like humans
Systems that act like humans

Ideal approach:
Systems that think rationally
Systems that act rationally

Alan Turing's definition would have fallen under the category of systems that act like humans.

In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines comprise AI algorithms that seek to create expert systems which make predictions or classifications based on input data.

Today, a lot of hype still surrounds AI development, which is expected of any new emerging technology in the market. As noted in Gartner's hype cycle (link resides outside IBM), product innovations like self-driving cars and personal assistants follow a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain. As Lex Fridman notes here (link resides outside IBM) in his MIT lecture in 2019, we are at the peak of inflated expectations, approaching the trough of disillusionment.

As conversations emerge around the ethics of AI, we can begin to see the initial glimpses of the trough of disillusionment. To learn where IBM stands within the conversation around AI ethics, read more here.

Weak AI, also called Narrow AI or Artificial Narrow Intelligence (ANI), is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. "Narrow" might be a more accurate descriptor for this type of AI, as it is anything but weak; it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI where a machine would have an intelligence equal to humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI), also known as superintelligence, would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that doesn't mean AI researchers aren't also exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey.

Since deep learning and machine learning tend to be used interchangeably, it's worth noting the nuances between the two. As mentioned above, both deep learning and machine learning are sub-fields of artificial intelligence, and deep learning is actually a sub-field of machine learning.

Deep learning is built on neural networks. The "deep" in deep learning refers to depth: a neural network comprising more than three layers, inclusive of the input and output layers, can be considered a deep learning algorithm.
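
As a rough illustration of that depth criterion (our own toy example, not IBM's code), here is a minimal network with an input layer, two hidden layers, and an output layer, written with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """One fully connected layer: a weight matrix and a bias vector."""
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

def relu(x):
    return np.maximum(0.0, x)

# Input (4 features) -> two hidden layers -> output (3 classes): more than
# three layers counting input and output, i.e. a (very small) "deep" network.
W1, b1 = layer(4, 16)
W2, b2 = layer(16, 16)
W3, b3 = layer(16, 3)

def forward(x):
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    logits = h2 @ W3 + b3
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # softmax over classes
    return e / e.sum(axis=-1, keepdims=True)

print(forward(rng.normal(size=(2, 4))))  # class probabilities for 2 samples
```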

The way in which deep learning and machine learning differ is in how each algorithm learns. Deep learning automates much of the feature-extraction piece of the process, eliminating some of the manual human intervention required and enabling the use of larger data sets. You can think of deep learning as "scalable machine learning," as Lex Fridman noted in the same MIT lecture cited above. Classical, or "non-deep," machine learning is more dependent on human intervention to learn. Human experts determine the hierarchy of features to understand the differences between data inputs, usually requiring more structured data to learn.

"Deep" machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesnt necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the hierarchy of features which distinguish different categories of data from one another. Unlike machine learning, it doesn't require human intervention to process data, allowing us to scale machine learning in more interesting ways.

There are numerous real-world applications of AI systems today. Below are some of the most common examples:

The idea of 'a machine that thinks' dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article) important events and milestones in the evolution of artificial intelligence include the following:

IBM has been a leader in advancing AI-driven technologies for enterprises and has pioneered the future of machine learning systems for multiple industries. Based on decades of AI research, years of experience working with organizations of all sizes, and on learnings from over 30,000 IBM Watson engagements, IBM has developed the AI Ladder for successful artificial intelligence deployments:

IBM Watson gives enterprises the AI tools they need to transform their business systems and workflows, while significantly improving automation and efficiency. For more information on how IBM can help you complete your AI journey, explore the IBM portfolio of managed services and solutions

Sign up for an IBMid and create your IBM Cloud account.

How to mitigate bias in AI – VentureBeat

Posted: at 12:44 pm

As the common proverb goes, to err is human. One day, machines may offer workforce solutions that are free from human decision-making mistakes; however, those machines learn through algorithms and systems built by programmers, developers, product managers, and software teams with inherent biases (like all other humans). In other words, to err is also machine.

Artificial intelligence has the potential to improve our lives in countless ways. However, since algorithms often are created by a few people and distributed to many, it's incumbent upon the creators to build them in a way that benefits populations and communities equitably. This is much easier said than done: no programmer can be expected to hold the full knowledge and awareness necessary to build a bias-free AI model, and further, the data gathered can be biased as a result of the way they are collected and the cultural assumptions behind those empirical methods. Fortunately, when building continuously learning AI systems of the future, there are ways to reduce that bias within models and systems. The first step is about recognition.

It's important to recognize that bias exists in the real world, in all industries and among all humans. The question to ask is not how to make bias go away but how to detect and mitigate such bias. Understanding this helps teams take accountability to ensure that models, systems, and data are incorporating inputs from a diverse set of stakeholders and samples.

With countless ways for bias to seep into algorithms and their applications, the decisions that impact models should not be made in isolation. Purposefully cultivating a workgroup of individuals from diversified backgrounds and ideologies can help inform decisions and designs that foster optimal and equitable outcomes.

Recently, the University of Cambridge conducted an evaluation of over 400 models attempting to detect COVID-19 faster via chest X-rays. The analysis found many algorithms had both severe shortcomings and a high risk of bias. In one instance, a model trained on X-ray images of adult chests was tested on a data set of X-rays from pediatric patients with pneumonia. Although adults experience COVID-19 at a higher rate than children, the model positively identified cases disproportionately. It's likely because the model weighted rib sizes in its analysis, when in fact the most important diagnostic approach is to examine the diseased area of the lung and rule out other issues like a collapsed lung.

One of the bigger problems in model development is that the datasets rarely are made available due to the sensitive nature of the data, so it's often hard to determine how a model is making a decision. This illustrates the importance of transparency and explainability in both how a model is created and its intended use. Having key stakeholders (e.g., clinicians, actuaries, data engineers, data scientists, care managers, ethicists, and advocates) developing a model in a single data view can remove several human biases that have persisted due to the siloed nature of healthcare.

It's also worth noting that diversity extends much further than the people creating algorithms. Fair algorithms test for bias in the underlying data in their models. In the case of the COVID-19 X-ray models, this was the Achilles heel. The data sampled and collected to build models can underrepresent certain groups whose outcomes we want to predict. Efforts must be made to build more complete samples with contributions from underrepresented groups to better represent populations.

Without developing more robust data sets and processes around how data is recorded and ingested, algorithms may amplify psychological or statistical bias from how the data was collected. This will negatively impact each step of the model-building process, such as the training, evaluation, and generalization phases. However, by including more people from different walks of life, the AI models built will have a broader understanding of the world, which will go a long way toward reducing the inherent biases of a single individual or homogeneous group.

It may surprise some engineers and data scientists, but lines of code can create unfairness in many ways. For example, Twitter automatically crops uploaded images to improve user experience, but its engineers received feedback that the platform was incorrectly missing or misidentifying certain faces. After multiple attempts to improve the algorithm, the team ultimately realized that image trimming was a decision best made by people. Choosing the argmax (the largest predicted probability) when finally outputting predictions amplifies disparate impact. An enormous number of test data sets, as well as scenario-based testing, is needed to neutralize these concerns.
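
A toy simulation (ours, with made-up numbers) shows how that argmax step can amplify small differences: two groups whose predicted scores differ only slightly on average end up with very different rates of positive decisions once every score is collapsed to a hard yes/no at 0.5.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two groups whose predicted probabilities differ only slightly on average.
group_a = np.clip(rng.normal(0.52, 0.05, 10_000), 0, 1)
group_b = np.clip(rng.normal(0.48, 0.05, 10_000), 0, 1)
print("mean score gap:", round(float(group_a.mean() - group_b.mean()), 3))  # ~0.04

# Argmax over {negative, positive} is just thresholding at 0.5: the small
# score gap becomes a much larger gap in the positive-decision rate.
rate_a = (group_a > 0.5).mean()
rate_b = (group_b > 0.5).mean()
print("positive-decision rate gap:", round(float(rate_a - rate_b), 3))      # ~0.3
```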

There will always be gaps in AI models, yet it's important to maintain accountability for them and correct them. And fortunately, when teams detect potential biases with a base model that is built and performs sufficiently, existing methods can be used to de-bias the data. Ideally, models shouldn't run without a proper continuous feedback loop where predicted outputs are reused to train new versions. When working with diverse teams, data, and algorithms, building feedback-aware AI can reduce the innate gaps where bias can sneak in; yet without that diversity of inputs, AI models will just re-learn their biases.
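
One widely used pre-processing method of the kind alluded to above is reweighing: assigning each training example a weight so that group membership and label become statistically independent in the weighted data (the Kamiran and Calders approach). The sketch below assumes that method; it is an illustration of ours, not necessarily what the author's teams use.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Per-example weights that make group and label independent when applied.

    weight(g, y) = P(group=g) * P(label=y) / P(group=g, label=y)
    """
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Group "b" has few positive labels, so its positive examples get upweighted;
# group "a"'s abundant positives get downweighted.
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
labels = [ 1,   1,   1,   0,   1,   0,   0,   0 ]
print([round(w, 2) for w in reweighing_weights(groups, labels)])
# [0.67, 0.67, 0.67, 2.0, 2.0, 0.67, 0.67, 0.67]
```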

If individuals and teams are cognizant of the existence of bias, then they have the necessary tools at the data, algorithm, and human levels to build a more responsible AI. The best solution is to be aware that these biases exist and maintain safety nets to address them for each project and model deployment. What tools or approaches do you use to create algorithm fairness in your industry? And most importantly, how do you define the purpose behind each model?

Akshay Sharma is executive vice president of artificial intelligence at digital health company Sharecare.

AI is about to shake up music forever but not in the way you think – BBC Science Focus Magazine

Posted: at 12:44 pm

Take a hike, Bieber. Step aside, Gaga. And watch out, Sheeran. Artificial intelligence is here and it's coming for your jobs.

That's, at least, what you might think after considering the ever-growing sophistication of AI-generated music.

While the concept of machine-composed music has been around since the 1800s (computing pioneer Ada Lovelace was one of the first to write about the topic), the fantasy has become reality in the past decade, with musicians such as Francois Pachet creating entire albums co-written by AI.

Some have even used AI to create new music from the likes of Amy Winehouse, Mozart and Nirvana, feeding their back catalogue into a neural network.

Even stranger, this July countries across the world will compete in the second annual AI Song Contest, a Eurovision-style competition in which all songs must be created with the help of artificial intelligence. (In case you're wondering, the UK scooped more than nul points in 2020, finishing in a respectable 6th place.)

But will this technology ever truly become mainstream? Will artificial intelligence, as artist Grimes fears, soon make musicians obsolete?

To answer these questions and more, we sat down with Prof Nick Bryan-Kinns, director of the Media and Arts Technology Centre at Queen Mary University of London. Below, he explains how AI music is composed, why this technology won't crush human creativity, and how robots could soon become part of live performances.

Music AIs use neural networks, which are really large sets of computing units that try to mimic how the brain works. And you can basically throw lots of music at this neural network and it learns patterns, just as the human brain does, by repeatedly being shown things.

What's tricky about today's neural networks is they're getting bigger and bigger. And it's becoming harder and harder for humans to understand what they're actually doing.

We're getting to a point now where we have these essentially black boxes that we put music into and nice new music comes out. But we don't really understand the details of what it's doing.

These neural networks also consume a lot of energy. If you're trying to train AI to analyse the last 20 years of pop music, for instance, you're chucking all that data in there and then using a lot of electricity to do the analysis and to generate a new song. At some point, we're going to have to question whether the environmental impact is worth this new music.

I'm a sceptic on this. A computer may be able to make hundreds of tracks easily, but there is likely still a human selecting which ones they think are nice or enjoyable.

There's a little bit of smoke and mirrors going on with AI music at the moment. You can throw Amy Winehouse's back catalogue into an AI and a load of music will come out. But somebody has to go and edit that. They have to decide which parts they like and which parts the AI needs to work on a bit more.

The problem is that we're trying to train the AI to make music that we like, but we're not allowing it to make music that it likes. Maybe the computer likes a different kind of music than we do. Maybe the future would just be all the AIs listening to music together without humans.

I'm also kind of a sceptic on that one as well. AI can generate lyrics that are interesting and have an interesting narrative flow. But lyrics for songs are typically based on people's life experiences, what's happened to them. People write about falling in love, things that have gone wrong in their life or something like watching the sunrise in the morning. AIs don't do that.

I'm a little bit sceptical that an AI would have that life experience to be able to communicate something meaningful to people.

This is where I think the big shift will be: mash-ups between different kinds of musical styles. There's research at the moment that takes the content of one kind of music and puts it in the style of another kind of music, exploring maybe three or four different genres at once.

While it's difficult to try these mash-ups in a studio with real musicians, an AI can easily try a million different combinations of genres.

People say this with every introduction of new technology into music. With the invention of the gramophone, for example, everybody was worried, saying it would be terrible and the end of music. But of course, it wasn't. It was just a different way of consuming music.

AI might allow more people to make music, because it's now much easier to make a professional-sounding single using just your phone than it was 10 or 20 years ago.

A woman interacts with an AI music conductor during the 2020 Internet Conference in Wuzhen, Zhejiang Province of China. Getty

At the moment, AI is like a tool. But in the near future, it could be more of a co-creator. Maybe it could help you out by suggesting some basslines, or give you some ideas for different lyrics that you might want to use based on the genres that you like.

I think the co-creation between the AI and the human as equal creative partners will be the really valuable part of this.

AI can create a pretty convincing human voice simulation these days. But the real question is why you would want it to sound like a human anyway. Why shouldn't the AI sound like an AI, whatever that is? That's what's really interesting to me.

I think we're way too fixated on getting the machines to sound like humans. It would be much more interesting to explore how it would make its own voice if it had the choice.

I love musical robots. A robot that can play music has been a dream for so many for over a century. And in the last maybe five or 10 years, it's really started to come together, where you've got the AI that can respond in real time and you've got robots that can actually move in very sort of human and emotional ways.

The fun thing is not just the music that they're making, but the gestures that go with the music. They can nod their heads or tap their feet to the beat. People are now building robots that you can play with in real time in a sort of band-like situation.

What's really interesting to me is that this combination of technology has come together where we can really feel like it's a real living thing that we're playing music with.

Yeah, for sure. I think that'd be great! It will be interesting to see what an audience makes of it. At the moment it's quite fun to play as a musician with a robot. But is it really fun watching robots perform? Maybe it is. Just look at Daft Punk!

Nick Bryan-Kinns is director of the Media and Arts Technology Centre at Queen Mary University of London, and professor of Interaction Design. He is also a co-investigator at the UKRI Centre for Doctoral Training in AI for Music, and a senior member of the Association for Computing Machinery.
