Alan Turing’s Everlasting Contributions to Computing, AI and Cryptography – NIST

An Enigma machine on display outside the Alan Turing Institute entrance inside the British Library, London.

Credit: Shutterstock/William Barton

Suppose someone asked you to devise the most powerful computer possible. Alan Turing, whose reputation as a central figure in computer science and artificial intelligence has only grown since his untimely death in 1954, applied his genius to problems such as this one in an age before computers as we know them existed. His theoretical work on this problem and others remains a foundation of computing, AI and modern cryptographic standards, including those NIST recommends.

The road from devising the most powerful computer possible to cryptographic standards has a few twists and turns, as does Turing's brief life.

Alan Turing

Credit: National Portrait Gallery, London

In Turing's time, mathematicians debated whether it was possible to build a single, all-purpose machine that could solve all problems that are computable. For example, we can compute a car's most energy-efficient route to a destination, and (in principle) the most likely way in which a string of amino acids will fold into a three-dimensional protein. Another example of a computable problem, important to modern encryption, is whether or not a bigger number can be expressed as the product of two smaller numbers. For example, 6 can be expressed as the product of 2 and 3, but 7 cannot be factored into smaller integers and is therefore a prime number.
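To make "computable" concrete, here is a minimal Python sketch (our illustration, not part of the original article) that decides the factoring question for small numbers by trial division:

```python
def smallest_factor(n):
    """Return the smallest factor of n greater than 1, or None if n is prime."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d
    return None

for n in (6, 7):
    d = smallest_factor(n)
    print(f"{n} = {d} x {n // d}" if d else f"{n} is prime")
# 6 = 2 x 3
# 7 is prime
```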

Some prominent mathematicians proposed elaborate designs for universal computers that would operate by following very complicated mathematical rules. It seemed overwhelmingly difficult to build such machines. It took the genius of Turing to show that a very simple machine could in fact compute all that is computable.

His hypothetical device is now known as a Turing machine. The centerpiece of the machine is a strip of tape divided into individual boxes. Each box contains a symbol (such as A, C, T, G, the letters of the genetic code) or a blank space. The strip of tape is analogous to today's hard drives that store bits of data. Initially, the string of symbols on the tape corresponds to the input, containing the data for the problem to be solved. The string also serves as the memory of the computer: the Turing machine writes onto the tape any data it needs to access later in the computation.

Credit: NIST

The device reads an individual symbol on the tape and follows instructions on whether to change the symbol or leave it alone before moving to another symbol. The instructions depend on the current state of the machine. For example, if the machine needs to decide whether the tape contains the text string "TC", it can scan the tape in the forward direction while switching between the states "previous letter was T" and "previous letter was not T". If, while in state "previous letter was T", it reads a C, it goes to a state "found it" and halts. If it encounters the blank symbol at the end of the input, it goes to the state "did not find it" and halts. Nowadays we would recognize the set of instructions as the machine's program.
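A minimal simulation of that machine, written in Python as an illustration (the states and the "TC" example follow the text above; everything else is our assumption):

```python
# A tiny Turing-style machine that scans a tape for the substring "TC".
# States: "not_T" (previous letter was not T) and "was_T" (previous letter
# was T), plus the halting answers "found it" and "did not find it".
# "_" stands for the blank symbol at the end of the input.

def scan_for_TC(tape):
    state, pos = "not_T", 0
    while True:
        symbol = tape[pos] if pos < len(tape) else "_"
        if symbol == "_":                       # ran off the input: halt
            return "did not find it"
        if state == "was_T" and symbol == "C":  # saw T then C: halt
            return "found it"
        state = "was_T" if symbol == "T" else "not_T"
        pos += 1                                # move to the next box

print(scan_for_TC("GATTCA"))  # found it
print(scan_for_TC("GGAT"))    # did not find it
```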

It took some time, but eventually it became clear to everyone that Turing was right: The Turing machine could indeed compute all that seemed computable. No number of additions or extensions to this machine could extend its computing capability.

To understand what can be computed, it is helpful to identify what cannot be computed. In a previous life as a university professor I had to teach programming a few times. Students often encounter the following problem: "My program has been running for a long time; is it stuck?" This is called the Halting Problem, and students often wondered why we simply couldn't detect infinite loops without actually getting stuck in them. It turns out that such a program is an impossibility. Turing showed that there does not exist a machine that detects whether or not another machine halts. From this seminal result followed many other impossibility results. For example, logicians and philosophers had to abandon the dream of an automated way of detecting whether an assertion (such as whether there are infinitely many prime numbers) is true or false, as that is uncomputable. If you could do this, then you could solve the Halting Problem simply by asking whether the statement "this machine halts" is true or false.
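The core of Turing's argument can be sketched as a proof by contradiction. Suppose, hypothetically, that a function halts(program, data) existed and always answered correctly; the sketch below (our illustration, with the detector deliberately stubbed out, since no correct one can be written) shows why that assumption collapses:

```python
def halts(program, data):
    """Hypothetical halting detector -- Turing proved this cannot be written."""
    raise NotImplementedError("no correct implementation can exist")

def paradox(program):
    """Does the opposite of whatever the detector predicts."""
    if halts(program, program):  # if the detector answers "halts"...
        while True:              # ...loop forever,
            pass
    # ...and if it answers "loops", halt immediately.

# Feeding paradox to itself defeats any candidate halts():
# - If halts(paradox, paradox) returned True, paradox(paradox) would loop forever.
# - If it returned False, paradox(paradox) would halt at once.
# Either answer is wrong, so the assumed detector cannot exist.
```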

Turing went on to make fundamental contributions to AI, theoretical biology and cryptography. His involvement with this last subject brought him honor and fame during World War II, when he played a very important role in adapting and extending cryptanalytic techniques invented by Polish mathematicians. This work broke the German Enigma machine encryption, making a significant contribution to the war effort.

Turing was gay. After the war, in 1952, the British government convicted him of having sex with a man. He stayed out of jail only by submitting to what is now called chemical castration. He died in 1954 at age 41 by cyanide poisoning, which was initially ruled a suicide but may have been an accident according to subsequent analysis. More than 50 years would pass before the British government apologized and pardoned him (after years of campaigning by scientists around the world). Today, the highest honor in computer science is called the Turing Award.

Turing's computability work provided the foundation for modern complexity theory. This theory tries to answer the question, "Among those problems that can be solved by a computer, which ones can be solved efficiently?" Here, "efficiently" means not in billions of years but in milliseconds, seconds, hours or days, depending on the computational problem.

For example, much of the cryptography that currently safeguards our data and communications relies on the belief that certain problems, such as decomposing an integer into its prime factors, cannot be solved before the Sun turns into a red giant and consumes the Earth (currently forecast for 4 billion to 5 billion years from now). NIST is responsible for cryptographic standards that are used throughout the world. We could not do this work without complexity theory.
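To get a feel for why factoring is believed to be hard, consider the naive method, trial division. This short Python sketch (our example, not NIST's) shows the approach; its worst-case work grows with the square root of the number being factored, so every extra pair of digits roughly multiplies the cost by ten:

```python
import time

def trial_division(n):
    """Factor n by trying every candidate divisor up to sqrt(n)."""
    d, factors = 2, []
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# The last product of two ~7-digit primes already takes visible time;
# for the 600-digit moduli used in real cryptography, this brute-force
# search is utterly hopeless.
for n in (15, 7919, 999_983 * 1_000_003):
    t0 = time.perf_counter()
    print(n, "=", trial_division(n), f"({time.perf_counter() - t0:.3f}s)")
```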

Technology sometimes throws us a curve, such as the discovery that if a sufficiently big and reliable quantum computer were built, it would be able to factor integers, thus breaking some of our cryptography. In this situation, NIST scientists must rely on the world's experts (many of them in-house) in order to update our standards. There are deep reasons to believe that quantum computers will not be able to break the cryptography that NIST is about to roll out. Among these reasons is that Turing's machine can simulate quantum computers. This implies that complexity theory gives us limits on what a powerful quantum computer can do.

But that is a topic for another day. For now, we can celebrate how Turing provided the keys to much of today's computing technology and even gave us hints on how to solve looming technological problems.


Julian Assange is my husband – his extradition is an abomination – The Independent

Last Friday, home secretary Priti Patel gave her approval for the UK to send my husband, Julian Assange, to the country that plotted his assassination.

Julian remains imprisoned in Belmarsh after more than three years at the behest of US prosecutors. He faces a prison sentence of up to 175 years for arguably the most celebrated publications in the history of journalism.

Patel's decision to extradite Julian has sent shockwaves across the journalism community. The home secretary flouted calls from representatives of the Council of Europe, the OSCE, almost 2,000 journalists and 300 doctors for the extradition to be halted.

When Julian calls around the children's bedtime, they talk over each other boisterously. The calls only last 10 minutes, so when the call ended abruptly the other night, Max, who is three, asked tearfully if it was because he'd been naughty. I absentmindedly said it wasn't his fault, but Mike Pompeo's. Five-year-old Gabriel asked: "Who is Mike Pompeo?"

Mike Pompeo had been on my mind because, while the home secretary in this country was busy signing Julian's extradition order, in Spain a High Court judge was summoning Pompeo for questioning regarding his role, as director of the CIA, in the reported plots to murder my husband.

While at the helm of the CIA, President Trump's most loyal supporter reportedly tasked his agents with preparing sketches and options for the assassination of their father.

The citation for Pompeo to appear before a Spanish judge comes out of an investigation into illicit spying on Julian and his lawyers through a company registered in Spain. Spanish police seized large amounts of electronic data, and insiders involved in carrying out the clandestine operations testified that they acted on the instructions of the CIA. They had discussed abducting and poisoning Julian.

Gabriel was six months old at the time and had been a target too. One witness was instructed to obtain DNA swabs from a soiled nappy in order to establish that Julian was his father. Another admitted to planting hidden microphones under the fire extinguishers to tap legally privileged meetings between Julian and his lawyers.

The recordings of Julian's legal meetings in the Ecuadorian embassy in London were physically transported to handlers in the United States on a regular basis. A break-in at Julian's lawyers' office was caught on camera, and investigators discovered photographs of Julian's lawyers' legal papers taken inside the embassy. The operations targeting his lawyers read like they were taken from a Soviet playbook.

Across the pond, ever since the Nixon administration's attempted prosecution of the New York Times over the Pentagon Papers more than half a century ago, constitutional lawyers have been warning that the 1917 Espionage Act would one day be abused to prosecute journalists.

It was President Obama's administration that enlivened the creeping misuse of the Espionage Act. More journalistic sources were charged under the Act than under all previous administrations combined, including WikiLeaks source Chelsea Manning, CIA torture whistleblower John Kiriakou and NSA spying whistleblower Edward Snowden.

Following massive public pressure, Obama commuted Chelsea Manning's 35-year sentence. Obama declined to prosecute Julian for publishing Manning's leaks because of the implications for press freedom.

After the Obama administration's Espionage Act charging spree, it was just a matter of time before another administration expanded the interpretation of the Act even further.

That day came soon enough. Trump's administration broke new legal ground with the indictment of Julian for receiving, possessing and publishing the Manning leaks. Meanwhile, in Langley, Virginia, Pompeo was tasking the CIA with assassination plans.


Priti Patel's decision comes amidst sweeping government reforms of an increasingly totalitarian bent: the plans to weaken the influence of the European Court of Human Rights and the decision to extradite Julian are the coup de grâce.

The home secretary's proposed reforms to the UK's Official Secrets Act largely track the Trump-era indictment against Julian: publishers and their sources can be charged as criminal co-conspirators.

Julians extradition case itself creates legal precedent. What has long been understood to be a bedrock principle of democracy, press freedom, will disappear in one fell swoop.

As it stands, no journalist is going to risk having what Julian is being subjected to happen to them. Julian must be freed before it's too late. His life depends on it. Your rights depend on it.


Exploring emerging topics in artificial intelligence policy – MIT News

Members of the public sector, private sector, and academia convened for the second AI Policy Forum Symposium last month to explore critical directions and questions posed by artificial intelligence in our economies and societies.

The virtual event, hosted by the AI Policy Forum (AIPF), an undertaking by the MIT Schwarzman College of Computing to bridge high-level principles of AI policy with the practices and trade-offs of governing, brought together an array of distinguished panelists to delve into four cross-cutting topics: law, auditing, health care, and mobility.

In the last year there have been substantial changes in the regulatory and policy landscape around AI in several countries, most notably in Europe with the development of the European Union Artificial Intelligence Act, the first attempt by a major regulator to propose a law on artificial intelligence. In the United States, the National AI Initiative Act of 2020, which became law in January 2021, is providing a coordinated program across the federal government to accelerate AI research and application for economic prosperity and security gains. Finally, China recently advanced several new regulations of its own.

Each of these developments represents a different approach to legislating AI, but what makes a good AI law? And when should AI legislation be based on binding rules with penalties versus establishing voluntary guidelines?

Jonathan Zittrain, professor of international law at Harvard Law School and director of the Berkman Klein Center for Internet & Society, says the self-regulatory approach taken during the expansion of the internet had its limitations, with companies struggling to balance their interests with those of their industry and the public.

"One lesson might be that actually having representative government take an active role early on is a good idea," he says. "It's just that they're challenged by the fact that there appear to be two phases in this environment of regulation: one, too early to tell, and two, too late to do anything about it. In AI I think a lot of people would say we're still in the 'too early to tell' stage, but given that there's no middle zone before it's too late, it might still call for some regulation."

A theme that came up repeatedly throughout the first panel on AI laws, a conversation moderated by Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and chair of the AI Policy Forum, was the notion of trust. "If you told me the truth consistently, I would say you are an honest person. If AI could provide something similar, something that I can say is consistent and is the same, then I would say it's trusted AI," says Bitange Ndemo, professor of entrepreneurship at the University of Nairobi and the former permanent secretary of Kenya's Ministry of Information and Communication.

Eva Kaili, vice president of the European Parliament, adds: "In Europe, whenever you use something, like any medication, you know that it has been checked. You know you can trust it. You know the controls are there. We have to achieve the same with AI." Kaili further stresses that building trust in AI systems will not only lead to people using more applications in a safe manner, but that AI itself will reap benefits as greater amounts of data are generated as a result.

The rapidly increasing applicability of AI across fields has prompted the need to address both the opportunities and challenges of emerging technologies and the impact they have on social and ethical issues such as privacy, fairness, bias, transparency, and accountability. In health care, for example, new techniques in machine learning have shown enormous promise for improving quality and efficiency, but questions of equity, data access and privacy, safety and reliability, and immunology and global health surveillance remain unresolved.

MIT's Marzyeh Ghassemi, an assistant professor in the Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science, and David Sontag, an associate professor of electrical engineering and computer science, collaborated with Ziad Obermeyer, an associate professor of health policy and management at the University of California, Berkeley School of Public Health, to organize AIPF Health Wide Reach, a series of sessions to discuss issues of data sharing and privacy in clinical AI. The organizers assembled experts devoted to AI, policy, and health from around the world with the goal of understanding what can be done to decrease barriers to access to high-quality health data to advance more innovative, robust, and inclusive research results while being respectful of patient privacy.

Over the course of the series, members of the group presented on a topic of expertise and were tasked with proposing concrete policy approaches to the challenge discussed. Drawing on these wide-ranging conversations, participants unveiled their findings during the symposium, covering nonprofit and government success stories and limited access models; upside demonstrations; legal frameworks, regulation, and funding; technical approaches to privacy; and infrastructure and data sharing. The group then discussed some of their recommendations, which are summarized in a report that will be released soon.

One of the findings calls for the need to make more data available for research use. Recommendations that stem from this finding include updating regulations to promote data sharing, to enable easier access to safe harbors such as the one the Health Insurance Portability and Accountability Act (HIPAA) provides for de-identification, as well as expanding funding for private health institutions to curate datasets, among others. Another finding, to remove barriers to data for researchers, supports a recommendation to decrease obstacles to research and development on federally created health data. "If this is data that should be accessible because it's funded by some federal entity, we should easily establish the steps that are going to be part of gaining access to that, so that it's a more inclusive and equitable set of research opportunities for all," says Ghassemi. The group also recommends taking a careful look at the ethical principles that govern data sharing. While there are already many principles proposed around this, Ghassemi says that "obviously you can't satisfy all levers or buttons at once, but we think that this is a trade-off that's very important to think through intelligently."

In addition to law and health care, other facets of AI policy explored during the event included auditing and monitoring AI systems at scale, and the role AI plays in mobility and the range of technical, business, and policy challenges for autonomous vehicles in particular.

The AI Policy Forum Symposium was an effort to bring together communities of practice with the shared aim of designing the next chapter of AI. In his closing remarks, Aleksander Madry, the Cadence Design Systems Professor of Computing at MIT and faculty co-lead of the AI Policy Forum, emphasized the importance of collaboration and the need for different communities to communicate with each other in order to truly make an impact in the AI policy space.

"The dream here is that we all can meet together: researchers, industry, policymakers, and other stakeholders, and really talk to each other, understand each other's concerns, and think together about solutions," Madry said. "This is the mission of the AI Policy Forum and this is what we want to enable."


Can Artificial Intelligence Be Creative? – Discovery Institute

Image: Lady Ada Lovelace (1815-1852), via Wikimedia Commons.

Editor's note: We are delighted to present an excerpt from Chapter 2 of the new book Non-Computable You: What You Do that Artificial Intelligence Never Will, by computer engineer Robert J. Marks, director of Discovery Institute's Bradley Center for Natural and Artificial Intelligence.

Some have claimed AI is creative. But creativity is a fuzzy term. To talk fruitfully about creativity, the term must be defined so that everyone is talking about the same thing and no one is bending the meaning to fit their purpose. Let's explore what creativity is, and it will become clear that, properly defined, AI is no more creative than a pencil.

Lady Ada Lovelace (1815-1852), daughter of the poet George Gordon, Lord Byron, was the first computer programmer, writing algorithms for a machine that was planned but never built. She was also quite possibly the first to note that computers will not be creative; that is, they cannot create something new. She wrote in 1842 that the computer "has no pretensions whatever to originate anything. It can do [only] whatever we know how to order it to perform."

Alan Turing disagreed. Turing is often called the father of computer science, having established the idea for modern computers in the 1930s. Turing argued that we can't even be sure that humans create, because humans do "nothing new under the sun" but they do surprise us. Likewise, he said, "Machines take me by surprise with great frequency." So perhaps, he argued, it is the element of surprise that's relevant, not the ability to originate something new.

Machines can surprise us if they're programmed by humans to surprise us, or if the programmer has made a mistake and thus experienced an unexpected outcome. Often, though, surprise occurs as a result of successful implementation of a computer search that explores a myriad of solutions to a problem. The solution chosen by the computer can be unexpected. The computer code that searches among different solutions, though, is not creative. The creativity credit belongs to the computer programmer who chose the set of solutions to be explored. One could give examples from computer searches for making the best move in the game of Go and for simulated swarms. Both results are surprising and unexpected, but no creativity is contributed by the computer code.
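A toy illustration of this surprise-from-search idea (our sketch, not from the book): a blind random search over arithmetic expressions can return an answer the programmer never anticipated, yet all the creativity lies in the programmer's choice of search space and scoring rule.

```python
import random

random.seed(42)

def random_expression():
    """Build a random arithmetic expression from five digits 1-9."""
    terms = [str(random.randint(1, 9)) for _ in range(5)]
    ops = [random.choice("+-*") for _ in range(4)]
    return "".join(t + o for t, o in zip(terms, ops)) + terms[-1]

# Mechanically score 20,000 candidates; the winner may look "clever",
# but the search space and the goal (get close to 100) were chosen by us.
best = min((random_expression() for _ in range(20_000)),
           key=lambda e: abs(eval(e) - 100))
print(best, "=", eval(best))
```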

Alan Turing, an atheist, wanted to show we are machines and that computers could be creative. Turing equated intelligence with problem solving, did not consider questions of consciousness and emotion, and referred to people as "human computers." Turing's version of the imitation game was proposed to show that computers could duplicate the conversational human. This is why the biographical movie starring Benedict Cumberbatch as Turing was titled The Imitation Game.

How can computers imitate humans, according to Turing? The imitation game (which came to be called the Turing test) simply asks whether, in a conversational exchange using text (that is, an exchange in which the participants are hidden from each other), a sufficiently sophisticated computer can be distinguished from a human. If a questioner gets lucid, human-sounding answers from the computer, and believes the computer is in fact a human typing in answers from another room, then the test has been passed. (Incidentally, the converse of the Turing test is easy. Simply ask it to calculate the cube root of 12 out to 10 significant figures. If the answer is almost immediate, you are talking to a computer.)
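That converse test really is trivial for a machine; a one-line illustration (ours, not the author's):

```python
# The cube root of 12 to 10 significant figures -- instantaneous for a machine.
print(f"{12 ** (1 / 3):.10g}")  # 2.289428485
```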

There are those who claim that today's chatbots have passed the Turing test. Most famously, a computer program known as Eugene Goostman purportedly passed. One posted subtitle claiming this reads, "Eugene Goostman fools 33% of interrogators into thinking it is human, in what is seen as a milestone in artificial intelligence."

But those running a test often cheat to tweak the outcome. That's what Goostman's programmers did. Goostman supposedly was a Ukrainian, which meant English was not his first language, so his poor grammar could be excused. He was supposedly a teenager, so any lack of depth of understanding could be chalked up to his naïve intellect. Likewise, his tendency to be silly and deflect questions could be considered typical teenage behavior. In short, Goostman's software character was gamed to pass the Turing test.

Here are a couple of examples of an exchange between Goostman and his questioner. Goostman is first asked a question posed in the first line of the lyrics of the song "Close to You," popularized by the Carpenters:

Questioner: Why do birds suddenly appear?

Goostman: Just because 2 plus 2 is 5! By the way, what's your occupation? I mean, could you tell me about your work?

We see here an often-used cheat in chatbots: if an answer is not known, the chatbot responds with an unrelated question to derail the conversation's direction.

Here's another exchange with Eugene Goostman:

Questioner: It is ok, I get sick of sick people. How is your stomach feeling today? Is it upset maybe?

Goostman: I think you can't smile at all. I bet you work in a funeral agency.

As you can see, Goostman's answers here are elusively non-responsive.

Selmer Bringsjord correctly notes that the Turing test is gamed by programmers. "Gamed" here is a nice word for being an elusive cheat. As Bringsjord writes, "Though progress toward Turing's dream is being made, it's coming only on the strength of clever but shallow trickery."

When gaming the system, chatbots can deflect detection by answering questions with other questions, giving evasive answers, or admitting ignorance. They display general intellectual shallowness as regards creativity and depth of understanding.

Goostman answered questions with questions like, "By the way, what's your occupation?" He also tried to change topics with conversational whiplash responses like "I bet you work in a funeral agency." These are examples of the "clever but shallow trickery" Bringsjord criticized.

What, then, do Turing tests prove? Only that clever programmers can trick gullible or uninitiated people into believing they're interacting with a human. Mistaking something for human does not make it human. Programming to shallowly mimic thought is not the same thing as thinking. Rambling randomness (such as the change-of-topic questions Goostman spit out) does not display creativity.

"I propose to consider the question, 'Can machines think?'" Turing said. Ironically, Turing not only failed in his attempt to show that machines can be conversationally creative, but also developed the computer science that shows humans are non-computable.


Worldwide Artificial Intelligence (AI) in Drug Discovery Market to reach $4.0 billion by 2027 at a CAGR of 45.7% – ResearchAndMarkets.com – Business…

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence (AI) in Drug Discovery Market by Component (Software, Service), Technology (ML, DL), Application (Neurodegenerative Diseases, Immuno-Oncology, CVD), End User (Pharmaceutical & Biotechnology, CRO), Region - Global forecast to 2024" report has been added to ResearchAndMarkets.com's offering.

The artificial intelligence (AI) in drug discovery market is projected to reach USD 4.0 billion by 2027 from USD 0.6 billion in 2022, at a CAGR of 45.7% during the forecast period. The growth of this market is primarily driven by factors such as the need to control drug discovery & development costs, the need to reduce the overall time taken in this process, and the rising adoption of cloud-based applications and services. On the other hand, the inadequate availability of skilled labor is a key factor restraining market growth to a certain extent over the forecast period.

Services segment is estimated to hold the major share in 2022 and is also expected to grow at the highest CAGR over the forecast period

On the basis of offering, the AI in drug discovery market is bifurcated into software and services. The services segment is expected to account for the largest share of the global AI in drug discovery market in 2022, and is expected to grow at the fastest CAGR during the forecast period. The advantages and benefits associated with these services and the strong demand for AI services among end users are the key factors driving the growth of this segment.

Machine learning technology segment accounted for the largest share of the global AI in drug discovery market

On the basis of technology, the AI in drug discovery market is segmented into machine learning and other technologies. The machine learning segment accounted for the largest share of the global market in 2021 and is expected to grow at the highest CAGR during the forecast period. High adoption of machine learning technology among CROs and pharmaceutical and biotechnology companies, and the capability of these technologies to extract insights from data sets, which helps accelerate the drug discovery process, are some of the factors supporting the market growth of this segment.

Pharmaceutical & biotechnology companies segment is expected to hold the largest share of the market in 2022

On the basis of end user, the AI in drug discovery market is divided into pharmaceutical & biotechnology companies, CROs, and research centers and academic & government institutes. In 2021, the pharmaceutical & biotechnology companies segment accounted for the largest share of the AI in drug discovery market. On the other hand, research centers and academic & government institutes are expected to witness the highest CAGR during the forecast period. The strong demand for AI-based tools to make the entire drug discovery process more time- and cost-efficient is the key growth factor for the pharmaceutical & biotechnology end-user segment.

Key Topics Covered:

1 Introduction

2 Research Methodology

3 Executive Summary

4 Premium Insights

4.1 Growing Need to Control Drug Discovery & Development Costs is a Key Factor Driving the Adoption of AI in Drug Discovery Solutions

4.2 Services Segment to Witness the Highest Growth During the Forecast Period

4.3 Deep Learning Segment Accounted for the Largest Market Share in 2021

4.4 North America is the Fastest-Growing Regional Market for AI in Drug Discovery

5 Market Overview

5.1 Introduction

5.2 Market Dynamics

5.2.1 Market Drivers

5.2.1.1 Growing Number of Cross-Industry Collaborations and Partnerships

5.2.1.2 Growing Need to Control Drug Discovery & Development Costs and Reduce Time Involved in Drug Development

5.2.1.3 Patent Expiry of Several Drugs

5.2.2 Market Restraints

5.2.2.1 Shortage of AI Workforce and Ambiguous Regulatory Guidelines for Medical Software

5.2.3 Market Opportunities

5.2.3.1 Growing Biotechnology Industry

5.2.3.2 Emerging Markets

5.2.3.3 Focus on Developing Human-Aware AI Systems

5.2.3.4 Growth in the Drugs and Biologics Market Despite the COVID-19 Pandemic

5.2.4 Market Challenges

5.2.4.1 Limited Availability of Data Sets

5.3 Value Chain Analysis

5.4 Porter's Five Forces Analysis

5.5 Ecosystem

5.6 Technology Analysis

5.7 Pricing Analysis

5.8 Business Models

5.9 Regulations

5.10 Conferences and Webinars

5.11 Case Study Analysis

6 Artificial Intelligence in Drug Discovery Market, by Offering

7 Artificial Intelligence in Drug Discovery Market, by Technology

8 Artificial Intelligence in Drug Discovery Market, by Application

9 Artificial Intelligence in Drug Discovery Market, by End-user

10 Artificial Intelligence in Drug Discovery Market, by Region

11 Competitive Landscape

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/q5pvns


Glorikian’s New Book Sheds Light on Artificial Intelligence Advances in the Healthcare Field – The Armenian Mirror-Spectator

After describing various ways in which AI and big data are already involved in our daily lives, ranging from the food we eat, the cars we drive and the things we buy, he concludes that it is leading to the Fourth Industrial Revolution, a phrase coined by Klaus Schwab, the head of the World Economic Forum. All aspects of life will be transformed in a way analogous to the prior industrial revolutions (first the use of steam and waterpower, second the expansion of electricity and telegraph cables, and third, the digital revolution of the end of the 20th century).

At the heart of the book are the chapters in which he explains what data and AI have already accomplished for our health and what they can do in the future. The ever-expanding amount of personal data available, combined with advances in AI, allows for increasingly accurate diagnoses and treatments, and better sensors and software. Glorikian notes that today there are over 350,000 different healthcare apps and that the mobile health market is expected to approach $290 billion in revenue by 2025.

Glorikian employs a light, informal style of writing, with references to pop culture such as Star Trek. He asks the reader questions and intersperses each chapter with what he calls sidebars: short illustrative stories or sets of examples. For example, "AI Saved My Life: The Watch That Called 911 for a Fallen Cyclist" (p. 68) starts with a man who lost consciousness after falling off his bike, and then lists other ways current phones can save lives. Other sidebars explain basic concepts like the meaning of genes and DNA, or gene editing with CRISPR.

Present and Future Advances

Before getting into more complex issues, Glorikian describes what may be most familiar to readers: the use of AI-enabled smartphone apps which guide individuals toward optimal diets and exercise, as well as allow for group activities through remote communication and virtual reality. There are already countless AI-enabled smartphone apps and sensors allowing us to track our movements and exercise, as well as our diets, sleep and even stress levels. In the future, their approach will become more tailored to individual needs and data, including genomics, environment, lifestyle and molecular biology, with specific recommendations.

He speculates as to what innovations the near future may bring, remarking: "What isn't clear is just how long it will take us to move from this point of collecting and finding patterns in the data, to one where we (and our healthcare providers) are actively using those patterns to make accurate predictions about our health." He gives the example of having an app to track migraine headaches, which can find and analyze patterns in the data (do they occur on nights when you have eaten a particular kind of food or traveled on a plane, for example). Eventually, at a more advanced stage, it might suggest you take an earlier flight or eat in a different restaurant that does not use ingredients that might be migraine triggers for you.

Healthcare will become more decentralized, Glorikian predicts, with people no longer forced to wait hours in hospital emergency rooms. Instead, some issues can be determined through phone apps and remote specialists, and others can be handled at rapid care facilities or pharmacies. Hospitals themselves will become more efficient with command centers monitoring the usage of various resources and using AI to monitor various aspects of patient health. Telerobotics will allow access to specialized surgeons located in major urban centers even if there are none in the local hospital.

In the chapter on genetics, Glorikian presents three ways in which unlocking the secrets of an individual's genome can have practical health consequences right now. The first is the prevention of bad drug reactions through pharmacogenomics, or learning how genes affect response to drugs. Second are enhanced screening and preventative treatment for hereditary cancer syndromes. One major advancement just starting to be used more, notes Glorikian, is the liquid biopsy, in which a blood sample allows identification of tumor cells, as opposed to standard physical biopsies. It is less invasive and sometimes more accurate for detecting cancers prior to the appearance of symptoms. The third way is DNA sequencing at birth to screen for many disorders which are treatable when caught early. The future may see corrections of various mutations through gene editing.

He points out the various benefits in the health field of collecting large sets of data. For example, it allows the use of AI or machine learning to better read mammogram results, to better predict which patients would benefit from procedures like cardiac resynchronization therapy, and to identify who is at greater risk for cardiovascular disease. There is hope that this approach can help detect the start and the progression of diseases like Alzheimer's or diabetic retinopathy. Ultimately it may even be able to predict fairly reliably when individuals will die.

At present, AI with access to sufficient data is helping identify new drugs, saving time and money by using statistical models to predict whether the new drugs will work even before trials. AI can determine which variables or dimensions to remove when making complex computations on models in order to speed up computational processes. This is important when there are large numbers of variables and vast amounts of data.
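The dimension-trimming described here is what standard techniques like principal component analysis do. A minimal sketch (our illustration, not from the book), using scikit-learn on synthetic data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# 500 samples with 50 measured variables, but only ~3 underlying factors.
latent = rng.normal(size=(500, 3))
data = latent @ rng.normal(size=(3, 50)) + 0.01 * rng.normal(size=(500, 50))

pca = PCA(n_components=0.99)  # keep enough components for 99% of the variance
reduced = pca.fit_transform(data)
print(reduced.shape)          # typically (500, 3): 50 variables compressed to 3
```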

Glorikian does not miss the opportunity to use the current Covid-19 crisis as a teaching moment. In a chapter called "Solving the Pandemic Problem," Glorikian discusses the role AI, machine learning and big data played in the fight against the coronavirus pandemic: in spotting it early on, predicting where it might travel next, sequencing its genome in days, and developing diagnostic tests, vaccines and treatments. Vaccine development, like drug development, is much faster today than even 20 years ago, thanks to computational modeling and virtual clinical trials and studies.

Potential Problems

Glorikian does not shy away from raising some of the potential problems associated with the wide use of AI in medicine, such as the threat to patient privacy and ethical questions about what machines should be allowed to do. Should genetic editing be allowed in humans for looks, intelligence or various types of talents? Should AI predictions of lifespan and dates of death be used? What types of decisions should machines be allowed to make in healthcare? And what sort of triage should be allowed in case of limited medical resources (if AI predicts one patient is for example ten times more likely to die than another despite medical intervention)? There are grave dangers if hackers access databanks or medical machines.

There are also potential operational problems with using data as a basis for AI, such as outdated information, biased data, missing data (and how it is handled), misanalyzed or differently analyzed data.

Despite all these issues, Glorikian is optimistic about the value of AI. He concludes: "But despite the risk, for the most part, the benefits outweigh the potential downsides… The data we willingly give up makes our lives better."

Armenian Connection

When asked at the end of June 2022 how Armenia compares with the US and other parts of the world in the use of AI in healthcare, he drew a distinction between the Armenian healthcare system and Armenian technology that is directed at the world healthcare system.

On the one hand, he said, "I don't know of a lot that is being incorporated into the healthcare system, although we do have a national electronic medical record system that they have really been improving on a consistent basis." Having such a record system throughout the country will provide data for the next step in the use of AI, and that, he said, is very exciting.

On the other hand, regarding technology companies involved in healthcare and biotechnology in Armenia, he said, "I would always like to see more, but there are some really interesting companies that have sprouted up over the last five years." Also, with the tech giant NVIDIA opening a research center in Armenia, Glorikian said he hoped there would be interesting synergies, since this company does invest in the healthcare area.

Harry Glorikian, second from left, next to Acting Prime Minister Nikol Pashinyan, at a December 19, 2018 Yerevan meeting

At the end of 2018, Glorikian met with then Acting Prime Minister Nikol Pashinyan to discuss launching the Armenian Genome project to expand the scope of genetic studies in the field of healthcare. He said that this undertaking was halted for reasons beyond his understanding. He said, "My lesson learned was you can move a lot faster and have significant impact by focusing on the private sector."

Indeed, this is what he does, as an individual investor and as a member of the Angel Investor Club of Armenia. While the group looks at a broad range of companies, mainly technology driven, he and a few other people in it take a look at those which are involved in healthcare. In fact, he is going to California at the very end of June to learn more about a robot companion for children called Moxie, prepared by Embodied, Inc., a company founded by veteran roboticist Paolo Pirjanian. Pirjanian, who was a guest on Glorikian's podcast several weeks ago, lives in California, but Glorikian said that the back end of his company's work is done in Armenia.

Glorikian added that he is always finding out about or running into Armenians in the diaspora doing work with AI.

Changes

When asked what has changed since the publication of the book last year, he replied, "Things are getting better!" While hardware does not change overnight, he said that there have been incremental improvements to software during the period of time it took to write the book and then have it published. He said, "For someone reading the book now, you are probably saying, 'I had no idea that this was even available.' For someone like me, you already feel a little behind."

Readers of the book have already begun to contact Glorikian with anecdotes about what it led them to find out and do. He hopes the book will continue to reach more people. He said, "The biggest thing I get out of it is when someone says, 'I learned this and I did something about it.'" When individuals have access to more quantifiable data, not only can they manage their own health better, but they also provide their doctors with more longitudinal data that helps the doctors be more effective. Glorikian said this should have the corollary effect of deflating healthcare costs in the long run.

One minor criticism of the book, at least of the paperback version that fell into the hands of this reviewer, is the poor quality of some of the images used. The text which is part of those illustrations is very hard to read. Otherwise, this is a very accessible read for an audience of varying backgrounds seeking basic information on the ongoing transformations in healthcare through AI.


Deep Dive Into Advanced AI and Machine Learning at The Behavox Artificial Intelligence in Compliance and Security Conference – Business Wire

MONTREAL--(BUSINESS WIRE)--On July 19th, Behavox will host a conference to share the next generation of artificial intelligence in Compliance and Security with clients, regulators, and industry leaders.

The Behavox AI in Compliance and Security Conference will be held at the company HQ in Montreal. With this exclusive in-person conference, Behavox is relaunching its pre-COVID tradition of inviting customers, regulators, AI industry leaders, and partners to its Montreal HQ to dive deep into workshops and keynote speeches on compliance, security, and artificial intelligence.

"We're extremely excited to relaunch our tradition of inviting clients to our offices in order to learn directly from the engineers and data scientists behind our groundbreaking innovations," said Chief Customer Intelligence Officer Fahreen Kurji. "Attendees at the conference will get to enjoy keynote presentations as well as Innovation Paddocks, where you can test drive our latest innovations, and also spend time networking with other industry leaders and regulators."

Keynote presentations will cover:

The conference will also feature Innovation Paddocks, where guests will be able to learn more from the engineers and data scientists behind Behavox innovations. At this conference, Behavox will demonstrate its revolutionary new product, Behavox Quantum. There will be test drives and numerous workshops covering everything from infrastructure for cloud orchestration to the AI engine at the core of Behavox Quantum.

What's in it for participants?

Behavox Quantum has been rigorously tested and benchmarked against existing solutions in the market, and it outperformed the competition by at least 3,000x using new AI risk policies, providing a holistic security program to catch malicious, immoral, and illegal actors, eliminating fraud and protecting your digital headquarters.

Attendees at the July 19th conference will include C-suite executives from top global banks, financial institutions, and corporations, with many prospects and clients sending entire delegations to the conference. Justin Trudeau, Canadian Prime Minister, will give the commencement speech at the conference in recognition and celebration of the world-leading AI innovations coming out of Canada.

This is a unique opportunity to test drive the product and meet the team behind the innovations as well as network with top industry professionals. Register here for the Behavox AI in Compliance and Security Conference.

About Behavox Ltd.

Behavox provides a suite of security products that help compliance, HR, and security teams protect their company and colleagues from business risks.

Through AI-powered analysis of all corporate communications, including email, instant messaging, voice, and video conferencing platforms, Behavox helps organizations identify illegal, immoral, and malicious behavior in the workplace.

Founded in 2014, Behavox is headquartered in Montreal and has offices in New York City, London, Seattle, Singapore, and Tokyo.

More information about the company is available at https://www.behavox.com/.


What’s Your Future of Work Path With Artificial Intelligence? – CMSWire

What does the future of artificial intelligence in the workplace look like for employee experience?

Over the last few years, artificial intelligence (AI) has become a very significant part of business operations across all industries. It's already making an impact as part of our daily lives, from appliances, voice assistants, search, surveillance, marketing, autonomous vehicles, video games and TVs to large sporting events.

AI is the result of applying cognitive science techniques to emulate human intellect and artificially create something that performs tasks previously only humans could perform, like reasoning, natural communication and problem-solving. It does this by leveraging machine learning techniques: reading and analyzing large data sets to identify patterns, detect anomalies and make decisions with no human intervention.
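As a toy illustration of that pattern-and-anomaly idea (our sketch, using scikit-learn rather than any specific product mentioned in this article), an isolation forest can flag data points that deviate from the bulk of a dataset:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=50, scale=5, size=(200, 1))  # typical daily readings
outliers = np.array([[5.0], [120.0]])                # two anomalous readings
data = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0).fit(data)
labels = model.predict(data)                         # -1 = anomaly, 1 = normal
print(data[labels == -1].ravel())                    # flags the extreme values
```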

In this ever-evolving market, AI has become crucial for businesses seeking to upscale workplace infrastructure and improve employee experience. According to Precedence Research, the AI market size is projected to surpass around $1,597.1 billion by 2030, expanding at a CAGR of 38.1% from 2022 to 2030.

Currently, AI is being used in the workplace to automate jobs that are repetitive or require a high degree of precision, like data entry or analysis. AI can also be used to make predictions about customer behavior or market trends.

In the future, AI is expected to increasingly be used to augment human workers, providing them with recommendations or suggestions based on the data that it has been programmed to analyze.

Today's websites are capable of using AI to quickly detect potential customer intent in real time based on interactions by the online visitor, and to show more engaging and personalized content to enhance the possibility of converting customers. As AI continues to develop, its capabilities in the workplace are expected to increase, making it an essential tool for businesses looking to stay ahead of the competition.

Kai-Fu Lee, a famous computer scientist, businessman and writer, said in a 2019 interview with CBS News that he believes 40% of the world's jobs will be replaced by robots capable of automating tasks.

AI has the potential to replace many types of jobs that involve mechanical or structured tasks that are repetitive in nature. Some opportunities we are seeing now are robotic vehicles, drones, surgical devices, logistics, call centers, and administrative tasks like housekeeping, data entry and proofreading. Even armies of robots for security and defense are being discussed.

That said, AI is going to be a huge disruption worldwide over the next decade or so. Most innovations come from disruptions; take the COVID-19 pandemic as an example: it dramatically changed how we work now.

While AI takes some jobs, it also creates many opportunities. When it comes to strategic thinking, creativity, emotions and empathy, humans will always win over machines. This is a call to adapt to the change and grow human strengths in the workplace in all possible dimensions. Nokia and BlackBerry mobile phones and Kodak cameras are living examples of failure to acknowledge digital disruption. Timely market research, using the right technology and enabling the workforce to adapt to change can bring success to businesses through digital transformation.

Related Article: What's Next for Artificial Intelligence in Customer Experience?

There will be changes in the traditional means of doing things, and more jobs will be generated. AI has the potential to revolutionize the workplace, transforming how we do everything from customer service to driving cars in busy places like downtown San Francisco. However, there are still several challenges that need to be overcome before AI can be widely implemented in the workplace.

One of the biggest challenges is developing algorithms that can reliably replicate human tasks. This is difficult because human tasks often involve common sense and reasoning, which are hard for computers to grasp. We should also ensure that AI systems are fair and unbiased. This is important because AI systems are often used to make decisions about things like hiring and promotions, and if they are biased then this can lead to discrimination. We live in the world of diversity, equity, and inclusion (DEI), and mistakes with AI can be costly for businesses. It may take a very long time to develop a customer-centric model that is completely dependent on AI, one that is reliable and trustworthy.

The future of AI is hard to predict, but there are a few key trends that are likely to shape its development. The increasing availability of data will allow AI systems to become more accurate and efficient, and as businesses and individuals rely on AI more and more, the need for new types of AI applications means more work and jobs. As these trends continue, AI is likely to have a significant impact on the workforce. It may very well lead to the automation of many cognitive tasks, including those that are currently performed by human workers.

This could result in a reduction in the overall demand for labor as well as an increase in the need for workers with skills that complement the AI systems. AI is the future of work; there's no doubt about that, but how it will shape the future of the human workforce remains to be seen.

Many are worried that AI will remove many jobs, while others see it as an opportunity to increase efficiency and accuracy in the workforce. No matter which side you're on, it's important to understand how AI is changing the way we work and what that means for the future.

Related Article: 8 Examples of Artificial Intelligence in the Workplace

Let's look at a few real-world examples that are already changing the way we work:

All of the above implementations look great. However, it is important to note that AI should be used as a supplement to human intelligence, not a replacement for it. When used properly, AI can help businesses thrive. The role of AI in the workplace is ever-evolving, and it will be interesting to see how businesses adopt these technologies and improve the overall work environment to provide the best employee experience.

An October 2020 Gallup poll found that 51% of workers are not engaged: they are psychologically unattached to their work and company.

Here are some employee experience aspects that AI could improve:

Employees need to know and trust that you have their best interests in mind. AI in human resources is going to be critical to delivering employee experiences along with human connection and values.


Taking the guesswork out of dental care with artificial intelligence – MIT News

When you picture a hospital radiologist, you might think of a specialist who sits in a dark room and spends hours poring over X-rays to make diagnoses. Contrast that with your dentist, who in addition to interpreting X-rays must also perform surgery, manage staff, communicate with patients, and run their business. When dentists analyze X-rays, they do so in bright rooms and on computers that aren't specialized for radiology, often with the patient sitting right next to them.

Is it any wonder, then, that dentists given the same X-ray might propose different treatments?

"Dentists are doing a great job given all the things they have to deal with," says Wardah Inam SM '13, PhD '16.

Inam is the co-founder of Overjet, a company using artificial intelligence to analyze and annotate X-rays for dentists and insurance providers. Overjet seeks to take the subjectivity out of X-ray interpretations to improve patient care.

"It's about moving toward more precision medicine, where we have the right treatments at the right time," says Inam, who co-founded the company with Alexander Jelicich '13. "That's where technology can help. Once we quantify the disease, we can make it very easy to recommend the right treatment."

Overjet has been cleared by the Food and Drug Administration to detect and outline cavities and to quantify bone levels to aid in the diagnosis of periodontal disease, a common but preventable gum infection that causes the jawbone and other tissues supporting the teeth to deteriorate.

In addition to helping dentists detect and treat diseases, Overjet's software is also designed to help dentists show patients the problems they're seeing and explain why they're recommending certain treatments.

The company has already analyzed tens of millions of X-rays, is used by dental practices nationwide, and is currently working with insurance companies that represent more than 75 million patients in the U.S. Inam is hoping the data Overjet is analyzing can be used to further streamline operations while improving care for patients.

"Our mission at Overjet is to improve oral health by creating a future that is clinically precise, efficient, and patient-centric," says Inam.

It's been a whirlwind journey for Inam, who knew nothing about the dental industry until a bad experience piqued her interest in 2018.

Getting to the root of the problem

Inam came to MIT in 2010, first for her master's and then her PhD in electrical engineering and computer science, and says she caught the bug for entrepreneurship early on.

"For me, MIT was a sandbox where you could learn different things and find out what you like and what you don't like," Inam says. "Plus, if you are curious about a problem, you can really dive into it."

While taking entrepreneurship classes at the Sloan School of Management, Inam eventually started a number of new ventures with classmates.

"I didn't know I wanted to start a company when I came to MIT," Inam says. "I knew I wanted to solve important problems. I went through this journey of deciding between academia and industry, but I like to see things happen faster and I like to make an impact in my lifetime, and that's what drew me to entrepreneurship."

During her postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL), Inam and a group of researchers applied machine learning to wireless signals to create biomedical sensors that could track a person's movements, detect falls, and monitor respiratory rate.

She didn't get interested in dentistry until after leaving MIT, when she changed dentists and received an entirely new treatment plan. Confused by the change, she asked for her X-rays and asked other dentists to have a look, only to receive yet another variation in diagnosis and treatment recommendations.

At that point, Inam decided to dive into dentistry for herself, reading books on the subject, watching YouTube videos, and eventually interviewing dentists. Before she knew it, she was spending more time learning about dentistry than she was at her job.

The same week Inam quit her job, she learned about MIT's Hacking Medicine competition and decided to participate. That's where she started building her team and making connections. Overjet's first funding came from the E14 Fund, the Media Lab-affiliated investment group.

"The E14 Fund wrote the first check, and I don't think we would've existed if it wasn't for them taking a chance on us," she says.

Inam learned that a big reason for variation in treatment recommendations among dentists is the sheer number of potential treatment options for each disease. A cavity, for instance, can be treated with a filling, a crown, a root canal, a bridge, and more.

When it comes to periodontal disease, dentists must make millimeter-level assessments to determine disease severity and progression. The extent and progression of the disease determines the best treatment.

"I felt technology could play a big role in not only enhancing the diagnosis but also in communicating with patients more effectively, so they understand and don't have to go through the confusing process I did of wondering who's right," Inam says.

Overjet began as a tool to help insurance companies streamline dental claims before the company integrated its tool directly into dentists' offices. Every day, some of the largest dental organizations nationwide are using Overjet, including Guardian Insurance, Delta Dental, Dental Care Alliance, and Jefferson Dental and Orthodontics.

Today, as a dental X-ray is imported into a computer, Overjet's software analyzes and annotates the images automatically. By the time the image appears on the computer screen, it has information on the type of X-ray taken, how a tooth may be impacted, the exact level of bone loss with color overlays, the location and severity of cavities, and more.
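To make that concrete, here is a minimal sketch of what such an annotation payload might look like once the analysis finishes. Every field name and value below is invented for illustration; this is not Overjet's actual output format.

```python
# Hypothetical annotated-X-ray payload, for illustration only.
# All field names and values here are invented, not Overjet's real schema.
annotation = {
    "image_type": "bitewing",        # kind of X-ray the software detected
    "tooth_findings": [
        {
            "tooth": 19,             # Universal Numbering System position
            "bone_loss_mm": 4.0,     # measured bone loss, in millimeters
            "overlay_color": "red",  # severity rendered as a color overlay
            "cavities": [{"surface": "distal", "severity": "moderate"}],
        },
        {
            "tooth": 30,
            "bone_loss_mm": 1.5,
            "overlay_color": "green",
            "cavities": [],          # no cavities detected on this tooth
        },
    ],
}
```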

The analysis gives dentists more information to talk to patients about treatment options.

"Now the dentist or hygienist just has to synthesize that information, and they use the software to communicate with you," Inam says. "So, they'll show you the X-rays with Overjet's annotations and say, 'You have 4 millimeters of bone loss, it's in red, that's higher than the 3 millimeters you had last time you came, so I'm recommending this treatment.'"

Overjet also incorporates historical information about each patient, tracking bone loss on every tooth and helping dentists detect cases where disease is progressing more quickly.
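That longitudinal idea can be sketched in a few lines. Everything below, including the function name and the 0.5 mm threshold, is a hypothetical illustration under the assumption that per-tooth bone-loss measurements are available from consecutive visits; it is not Overjet's actual logic.

```python
# Toy sketch: flag teeth whose bone loss grew quickly between two visits.
# The 0.5 mm threshold is an invented example value, not a clinical standard.
PROGRESSION_THRESHOLD_MM = 0.5

def flag_progressing_teeth(previous_visit: dict, current_visit: dict) -> list:
    """Return tooth numbers whose bone loss grew by more than the threshold.

    Both arguments map tooth number -> bone loss in millimeters.
    """
    flagged = []
    for tooth, current_mm in current_visit.items():
        # Teeth with no prior measurement default to "no change."
        previous_mm = previous_visit.get(tooth, current_mm)
        if current_mm - previous_mm > PROGRESSION_THRESHOLD_MM:
            flagged.append(tooth)
    return flagged

# Mirrors the quote above: tooth 19 went from 3 mm to 4 mm between visits.
previous = {19: 3.0, 30: 1.5}
current = {19: 4.0, 30: 1.6}
print(flag_progressing_teeth(previous, current))  # prints [19]
```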

"We've seen cases where a cancer patient with dry mouth goes from nothing to something extremely bad in six months between visits, so those patients should probably come to the dentist more often," Inam says. "It's all about using data to change how we practice care, think about plans, and offer services to different types of patients."

The operating system of dentistry

Overjet's FDA clearances cover two highly prevalent diseases. They also put the company in a position to conduct industry-level analysis and help dental practices compare themselves to their peers.

"We use the same tech to help practices understand clinical performance and improve operations," Inam says. "We can look at every patient at every practice and identify how practices can use the software to improve the care they're providing."

Moving forward, Inam sees Overjet playing an integral role in virtually every aspect of dental operations.

"These radiographs have been digitized for a while, but they've never been utilized because the computers couldn't read them," Inam says. "Overjet is turning unstructured data into data that we can analyze. Right now, we're building the basic infrastructure. Eventually we want to grow the platform to improve any service the practice can provide, basically becoming the operating system of the practice to help providers do their job more effectively."

Read more:
Taking the guesswork out of dental care with artificial intelligence - MIT News

How artificial intelligence is boosting crop yield to feed the world – Freethink

Over the last several decades, genetic research has seen incredible advances in gene sequencing technologies. In 2004, scientists completed the Human Genome Project, an ambitious effort to sequence the human genome that cost $3 billion and took 13 years. Now, a person can get their genome sequenced for less than $1,000 and within about 24 hours.

Scientists capitalized on these advances by sequencing everything from the elusive giant squid to the Ethiopian eggplant. With this technology came promises of miraculous breakthroughs: all diseases would be cured and world hunger would be a thing of the past.

So, where are these miracles?

In 2015, a group of researchers founded Yield10 Bioscience, an agricultural biotech company that aimed to use artificial intelligence to start turning those promises into reality.

Two things drove the development of Yield10 Bioscience.

"One, obviously, [the need for] global food security: we need about 60 to 70% more food production by 2050," explained Dr. Oliver Peoples, CEO of Yield10 Bioscience, in an interview with Freethink. "And then, of course, CRISPR."

It turns out that having the tools to sequence DNA is only step one of manufacturing the miracles we were promised.

The second step is figuring out what a sequence of DNA actually does. In other words, it's one thing to discover a gene, and it is another thing entirely to discover a gene's role in a specific organism.

In order to do this, scientists manipulate the gene: delete it from an organism and see what functions are lost, or add it to an organism and see what is gained. During the early genetics revolution, although scientists had tools to easily and accurately sequence DNA, their tools to manipulate DNA were labor-intensive and cumbersome.

Around 2012, CRISPR technology burst onto the scene, and it changed everything. Scientists had been investigating CRISPR, a system that evolved in bacteria to fight off viruses, since the '80s, but it took 30 years for them to finally understand how they could use it to edit genes in any organism.

Suddenly, scientists had a powerful tool that could easily manipulate genomes. Equipped with DNA sequencing and editing tools, scientists could complete studies that once took years or even decades in mere months.

Promises of miracles poured back in, with renewed vigor: CRISPR would eliminate genetic disorders and feed the world! But of course, there is yet another step: figuring out which genes to edit.

Over the last couple of decades, researchers have compiled databases of millions of genes. For example, GenBank, the National Institutes of Health's (NIH) genetic sequence database, contains 38,086,233 genes, of which only tens of thousands have some functional information.

For example, ARGOS is a gene involved in plant growth. Consequently, it is a very well-studied gene. Scientists found that genetically engineering Arabidopsis, a fast-growing plant commonly used to study plant biology, to express lots of ARGOS made the plant grow faster.

Dozens of other plants have ARGOS (or at least genes very similar to it), such as pineapple, radish, and winter squash. Those plants, however, are hard to genetically manipulate compared to Arabidopsis. Thus, ARGOS's function in crops in general hasn't been as well studied.
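For readers who want to poke at these databases themselves, the sketch below shows one way to count ARGOS-like gene records at NCBI using Biopython's Entrez module. The email address is a placeholder, and the search term is just an illustration of the kind of lookup involved.

```python
# Minimal sketch: count gene records matching "ARGOS" in green plants at NCBI.
# Requires Biopython (pip install biopython). The search term is illustrative.
from Bio import Entrez

Entrez.email = "you@example.com"  # placeholder; NCBI asks for a contact email

handle = Entrez.esearch(
    db="gene",  # NCBI's Gene database
    term="ARGOS[Gene Name] AND Viridiplantae[Organism]",  # ARGOS in plants
)
record = Entrez.read(handle)
handle.close()

print(f"Matching gene records: {record['Count']}")
```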

CRISPR suddenly changed the landscape for small groups of researchers hoping to innovate in agriculture. It was an affordable technology that anyone could use, but no one knew what to do with it. Even the largest research corporations in the world don't have the resources to test all the genes that have been identified.

"I think if you talk to all the big crop companies, they've all got big investments in CRISPR. And I think they're all struggling with the same question, which is, 'This is a great tool. What do I do with it?'" said Dr. Peoples.

The holy grail of crop science, according to Dr. Peoples, would be a tool that could identify three or four genetic changes that would double crop production for whatever you're growing.

With CRISPR, those changes could be made right now. However, there needs to be a way to identify those changes, and that information is buried in the massive databases.

To develop the tool that can dig them out, Dr. Peoples' team merged artificial intelligence with synthetic biology, a field of science that involves redesigning organisms to have useful new abilities, such as increasing crop yield or bioplastic production.

This union created the Gene Ranking Artificial Intelligence Network (GRAIN), an algorithm that evaluates scientific databases like GenBank and identifies genes that act at a fundamental level in crop metabolism.

That "fundamental level" aspect is one of the keys to GRAIN's long-term success. The algorithm identifies genes that are common across multiple crop types, so when a powerful gene is found in one crop, the same edit can be applied to many others.
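GRAIN itself is proprietary, but the core idea, ranking candidate genes by how broadly they are conserved across crops, can be sketched simply. The data and scoring rule below are invented for illustration and are not Yield10's actual algorithm.

```python
# Toy sketch of cross-crop gene ranking: score each gene by how many crop
# species carry a close homolog, then sort. All data here is invented.

# gene name -> set of crops with a close homolog (hypothetical examples)
homologs = {
    "ARGOS":  {"pineapple", "radish", "winter squash", "camelina"},
    "GENE_X": {"camelina"},
    "GENE_Y": {"radish", "camelina", "winter squash"},
}

def rank_by_conservation(homolog_map: dict) -> list:
    """Rank genes by the number of crops in which a homolog appears."""
    return sorted(homolog_map, key=lambda g: len(homolog_map[g]), reverse=True)

print(rank_by_conservation(homologs))  # -> ['ARGOS', 'GENE_Y', 'GENE_X']
```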

For example, using the GRAIN platform, Dr. Peoples and his team identified four genes that may significantly impact seed oil content in Camelina, a plant similar to rapeseed (the source of canola oil). When the researchers increased the activity of just one of those genes via CRISPR, the plants showed a 10% increase in seed oil content.

It's not quite a miracle yet, but with more advances in gene editing and AI happening all the time, the promises of the genetic revolution are finally starting to pay off.

We'd love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at tips@freethink.com.

Read more:
How artificial intelligence is boosting crop yield to feed the world - Freethink