Why cracking nuclear fusion will depend on artificial intelligence – New Scientist

The promise of clean, green nuclear fusion has been touted for decades, but the rise of AI means the challenges could finally be overcome

By Abigail Beall

THE big joke about sustainable nuclear fusion is that it has always been 30 years away. Like any joke, it contains a kernel of truth. The dream of harnessing the reaction that powers the sun was big news in the 1950s, just around the corner in the 1980s, and the hottest bet of the past decade.

But time is running out. Our demand for energy is burning up the planet, depleting its resources and risking damaging Earth beyond repair. Wind, solar and tidal energy provide some relief, but they are limited and unpredictable. Nuclear fission comes with the dangers of reactor meltdowns and radioactive waste, while hydropower can be ecologically disruptive. Fusion, on the other hand, could provide almost limitless energy without releasing carbon dioxide or producing radioactive waste. It is the dream power source. The perennial question is: can we make it a reality?

Perhaps now, finally, we can. That isn't just because of the myriad fusion start-ups increasingly sensing a lucrative market opportunity just around the corner and challenging the primacy of the traditional big-beast projects. Or just because of innovative approaches, materials and technologies that are fuelling an optimism that we can at last master fusion's fiendish complexities. It is also because of the entrance of a new player, one that could change the rules of the game: artificial intelligence. In the right hands, it might make the next 30 years fly by.

Nuclear fusion is the most widespread source of energy in the universe, and one of the most efficient: just a few grams of fuel release the same energy as

View original post here:

Why cracking nuclear fusion will depend on artificial intelligence - New Scientist

Decoding the Future Trajectory of Healthcare with AI – ReadWrite

Artificial intelligence (AI) is becoming more sophisticated in its applications by the day, delivering greater efficiency and speed at lower cost. Nearly every sector has been reaping the benefits of AI in recent times, and the healthcare industry is no exception. Here is a look at the future trajectory of healthcare with AI.

The impact of artificial intelligence on the healthcare industry, through machine learning (ML) and natural language processing (NLP), is transforming care delivery. Additionally, patients are expected to gain far greater access to their health-related information than before through applications such as smart wearable devices and mobile electronic medical records (EMR).

Personalized healthcare will empower patients to take the wheel of their own well-being, facilitate high-end care, and extend better patient-provider communication to underserved areas.

For instance, IBM Watson for Health is helping healthcare organizations apply cognitive technology to unlock vast amounts of health data and power diagnosis.

In addition, Google's DeepMind Health is collaborating with researchers, clinicians, and patients to solve real-world healthcare problems. The company has combined systems neuroscience with machine learning to develop strong general-purpose learning algorithms within neural networks that mimic the human brain.

Companies are working to develop AI technology that solves existing challenges, especially within the healthcare space. A strong focus on funding and launching AI healthcare programs played a significant role in Microsoft Corporation's decision to launch AI for Health, a five-year, US$40 million program, in January 2020.

The Microsoft program will use artificial intelligence tools to resolve some of the greatest healthcare challenges including global health crises, treatment, and disease diagnosis. Microsoft has also ensured that academia, non-profit, and research organizations have access to this technology, technical experts, and resources to leverage AI for care delivery and research.

In January 2020, these factors also influenced Takeda Pharmaceutical Company and MIT's School of Engineering to join hands for three years to drive innovation and the application of AI in the healthcare industry and drug development.

AI applications center on three main investment areas: diagnostics, engagement, and digitization. With the rapid advancement of these technologies, there are exciting breakthroughs in incorporating AI into medical services.

One of the most interesting applications of AI is robotics. Robots are not so much replacing trained medical staff as making them more efficient in several areas. Robots help control costs while potentially providing better care and performing precise surgery in confined spaces.

China and the U.S. have started investing in the development of robots to support doctors. In November 2017, a robot in China passed the national medical licensing exam using only its AI brain. China was also home to the first semi-automated operating robot used to suture blood vessels as fine as 0.03 mm.

To prevent coronavirus from spreading, American doctors are relying on a robot that can monitor a patient's actions and vitals. Robots are also being used for recovery and consulting assistance and as transport units. These robots are showing significant potential to revolutionize medical procedures in the future.

Precision medicine is an emerging approach to disease prevention and treatment. It allows researchers and doctors to predict more accurate treatment and prevention strategies for individual patients.

The advent of precision medicine technology has allowed healthcare to actively track patients' physiology in real time, capture multi-dimensional data, and create predictive algorithms that use collective learnings to calculate individual outcomes.
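To make the idea concrete, here is a minimal sketch of such a predictive algorithm: a model trained on pooled multi-dimensional readings (the "collective learnings") that scores an individual patient's latest vitals. The features, synthetic data, and logistic-regression choice are illustrative assumptions, not any particular vendor's product.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Illustrative multi-dimensional patient data: resting heart rate,
# systolic blood pressure, and BMI, with a synthetic outcome label.
X = np.column_stack([
    rng.normal(72, 10, 500),   # heart rate (bpm)
    rng.normal(125, 15, 500),  # systolic BP (mmHg)
    rng.normal(27, 4, 500),    # BMI
])
risk = 0.04 * (X[:, 0] - 72) + 0.05 * (X[:, 1] - 125) + 0.1 * (X[:, 2] - 27)
y = (risk + rng.normal(0, 1, 500) > 0).astype(int)  # 1 = adverse event

model = LogisticRegression().fit(X, y)  # learn from the pooled cohort

# Real-time use: score a new patient's latest streamed readings.
new_patient = np.array([[88, 145, 31]])
print(f"predicted individual risk: {model.predict_proba(new_patient)[0, 1]:.0%}")
```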

In recent years, there has been an immense focus on enabling direct-to-consumer genomics. Companies are now aiming to create patient-centric products that digitize processes and genomics workflows related to ordering complex tests in clinics.

In January 2020, ixLayer, a start-up based in San Francisco, launched a first-of-its-kind precision health testing platform to enhance the delivery of diagnostic testing and simplify the complex relationships among physicians, precision health tests, and patients.

Personal health monitoring is a promising example of AI in healthcare. With the emergence of advanced AI and Internet of Medical Things (IoMT), demand for consumer-oriented products such as smart wearables for monitoring well-being is growing significantly.

Owing to the rapid proliferation of smart wearables and mobile apps, enterprises are introducing varied options to monitor personal health.

In October 2019, Gali Health, a health technology company, introduced its Gali AI-powered personal health assistant for people suffering from inflammatory bowel diseases (IBD). It offers health tracking and analytical tools, medically-vetted educational resources, and emotional support to the IBD community.

Similarly, start-ups are also coming forward with innovative devices integrated with state-of-the-art AI technology to contribute to the growing demand for personal health monitoring.

In recent years, AI has been used in numerous ways to support medical imaging of all kinds. At present, its biggest use is assisting in the analysis of images and performing single, narrow recognition tasks.

In the United States, AI is considered highly valuable in enhancing business operations and patient care. It has the greatest impact on patient care by improving the accuracy of clinical outcomes and medical diagnosis.

The strong presence of leading market players in the country is bolstering the demand for medical imaging in hospitals and research centers.

In January 2020, Hitachi Healthcare Americas announced it would open a new dedicated R&D center in North America. The center will leverage advancements in machine learning and artificial intelligence to bring about the next generation of medical imaging technology.

With a plethora of issues driven by the growing rate of chronic disease and the aging population, the need for new, innovative solutions in the healthcare industry is on the upswing.

Unleashing AI's full potential in the healthcare industry is not an easy task. Healthcare providers and AI developers will have to work together to tackle the obstacles on the path towards integrating new technologies.

Clearing these hurdles will require a combination of technological refinement and shifting mindsets. As the AI trend becomes more deeply rooted, one question keeps surfacing: will AI replace doctors and medical professionals, especially radiologists and physicians? The more likely answer is that it will increase the efficiency of medical professionals.

Initiatives by IBM Watson and Google's DeepMind will soon unlock critical answers. However, while AI in healthcare aims to mimic the human brain, human judgment and intuition cannot be substituted.

Even though AI is augmenting the industry's existing capabilities, it is unlikely to fully replace human intervention. An AI-skilled workforce will displace only those who refuse to embrace the technology.

Healthcare is a dynamic industry with significant opportunities. However, uncertainty, cost concerns, and complexity are making it an unnerving one.

The best opportunity for healthcare in the near future is hybrid models, in which clinicians and physicians are supported in treatment planning, diagnosis, and identifying risk factors. Also, with the growing geriatric population and the rise of health-related concerns across the globe, the overall burden of disease management has increased.

Patients are also expecting better treatment and care. Thanks to growing innovation in the healthcare industry around improved diagnosis and treatment, AI has gained acceptance among patients and doctors.

To develop better medical technology, entrepreneurs, healthcare service providers, investors, policy developers, and patients are coming together.

These factors point to a brighter future for AI in the healthcare industry. Widespread use of, and major advances in, AI-integrated technology are extremely likely in the next few years. Moreover, healthcare providers are expected to invest in adequate IT infrastructure and data centers to support new technological development.

Healthcare companies should continually integrate new technologies to build strong value and keep patients' attention.

-

The insights presented in the article are based on a recent research study on Global Artificial Intelligence In Healthcare Market by Future Market Insights.

Abhishek Budholiya is a tech blogger, digital marketing pro, and has contributed to numerous tech magazines. Currently, as a technology and digital branding consultant, he offers his analysis on the tech market research landscape. His forte is analysing the commercial viability of a new breakthrough, a trait you can see in his writing. When he is not ruminating about the tech world, he can be found playing table tennis or hanging out with his friends.

See more here:

Decoding the Future Trajectory of Healthcare with AI - ReadWrite

Google’s AI subsidiary turns to blockchain technology to track UK health data – The Verge

Forays by Google subsidiary DeepMind Health into the UK's medical institutions have been characterized by two major themes. First, amazing results powered by cutting-edge AI; and second, a lack of transparency over the handling of the UK's publicly funded data. With the science going swimmingly, DeepMind Health is focusing more than ever on reassuring UK citizens that their medical records are in safe hands. Its latest plan is a public ledger that shows what data it's using, when, and for which purposes.

The initiative is called the Verifiable Data Audit, and was announced this week in a blog post written by DeepMind co-founder Mustafa Suleyman and the company's head of security and transparency, Ben Laurie. The audit technology is not yet in place, but would keep a publicly accessible record of every time DeepMind accesses hospital data, using technology related to the blockchain.

"Each time there's any interaction with data, we'll begin to add an entry to a special digital ledger," write Suleyman and Laurie. "That entry will record the fact that a particular piece of data has been used, and also the reason why. For example, that blood test data was checked against the NHS national algorithm to detect possible acute kidney injury."

Like blockchain technologies, this information will be append-only: it can't be edited after the fact or deleted. It will also make use of cryptographic proofs that allow outside experts to verify the integrity of the data. Unlike most blockchain systems, though, the ledger won't be distributed among the public, but stored by a number of entities including data processors like DeepMind Health and healthcare providers. The company says this won't impede the verification process, and that the choice was made to make the ledger more efficient. Blockchain systems (including Bitcoin) that are distributed among multiple players take a lot of power to compile and check: as much as a small country, according to some estimates.
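DeepMind hasn't published implementation details, but the tamper-evident, append-only property can be sketched with a simple hash chain, where each entry commits to its predecessor. The entry fields below are invented for illustration, not DeepMind's actual schema.

```python
import hashlib
import json
import time

class AuditLedger:
    """Minimal append-only ledger: each entry commits to the previous
    entry's hash, so editing or deleting any record breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, data_id, reason):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "timestamp": time.time(),
            "data_id": data_id,      # which piece of data was touched
            "reason": reason,        # why it was accessed
            "prev_hash": prev_hash,  # link to the previous entry
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)

    def verify(self):
        """Recompute every hash; any tampering breaks verification."""
        prev_hash = "0" * 64
        for record in self.entries:
            body = {k: v for k, v in record.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if record["hash"] != expected or body["prev_hash"] != prev_hash:
                return False
            prev_hash = record["hash"]
        return True

ledger = AuditLedger()
ledger.append("blood_test_123", "checked against NHS acute kidney injury algorithm")
assert ledger.verify()
```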

Speaking to The Guardian, Nicola Perrin of the Wellcome Trust said the technology should create a robust audit trail for public health data managed by DeepMind. "One of the main criticisms about DeepMind's collaboration with the Royal Free [Hospital Trust] was the difficulty of distinguishing between uses of data for care and for research," said Perrin. "This type of approach could help address that challenge, and suggests they are trying to respond to the concerns." DeepMind Health says it wants to implement the first pieces of the audit later this year.

Originally posted here:

Google's AI subsidiary turns to blockchain technology to track UK health data - The Verge

AI startup investment is on pace for a record year – TechCrunch

The startup investing market is crowded, expensive and rapid-fire today as venture capitalists work to preempt one another, hoping to deploy funds into hot companies before their competitors. The AI startup market may be even hotter than the average technology niche.

This should not surprise.

In the wake of the Microsoft-Nuance deal, The Exchange reported that it would be reasonable to anticipate an even more active and competitive market for AI-powered startups. Our thesis was that after Redmond dropped nearly $20 billion for the AI company, investors would have a fresh incentive to invest in upstarts with an AI focus or strong AI component; exits, especially large transactions, have a way of spurring investor interest in related companies.

That expectation is coming true. Investors The Exchange reached out to in recent days reported a fierce market for AI startups.

The Exchange explores startups, markets and money.

Read it every morning on Extra Crunch or get The Exchange newsletter every Saturday.

But don't presume that investors are simply falling over one another to fund companies betting on a future that may or may not arrive. Per a Signal AI survey of 1,000 C-level executives, nearly 92% thought that companies should lean on AI to improve their decision-making processes. And 79% of respondents said that companies are already doing so.

The gap between the two numbers implies that there is room in the market for more corporations to learn to lean on AI-powered software solutions, while the first metric points to a huge total addressable market for startups building software on a foundation of artificial intelligence.

Now deep in the second quarter, we're diving back into the AI startup market this morning, leaning on notes from Blumberg Capital's David Blumberg, Glasswing Ventures' Rudina Seseri, Atomico's Ben Blume and Jocelyn Goldfein of Zetta Venture Partners. We'll start by looking at recent venture capital data regarding AI startups and dig into what VCs are seeing in both the U.S. and European markets before chatting about applied AI versus core AI, and in which contexts VCs might still care about the latter.

The exit market for AI startups is more than just the big Microsoft-Nuance deal. CB Insights reports that four of the largest five American tech companies have bought a dozen or more AI-focused startups to date, with Apple leading the pack with 29 such transactions.

See the original post here:

AI startup investment is on pace for a record year - TechCrunch

AI is learning when it should and shouldnt defer to a human – MIT Technology Review

The context: Studies show that when people and AI systems work together, they can outperform either one acting alone. Medical diagnostic systems are often checked over by human doctors, and content moderation systems filter what they can before requiring human assistance. But algorithms are rarely designed to optimize for this AI-to-human handover. If they were, the AI system would only defer to its human counterpart if the person could actually make a better decision.

The research: Researchers at MIT's Computer Science and AI Laboratory (CSAIL) have now developed an AI system to do this kind of optimization based on the strengths and weaknesses of the human collaborator. It uses two separate machine-learning models: one makes the actual decision, whether that's diagnosing a patient or removing a social media post, and one predicts whether the AI or human is the better decision maker.

The latter model, which the researchers call the "rejector," iteratively improves its predictions based on each decision maker's track record over time. It can also take into account factors beyond performance, including a person's time constraints or a doctor's access to sensitive patient information not available to the AI system.
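The article doesn't spell out the training objective, but the two-model setup can be sketched roughly as follows: a classifier makes the call, and a "rejector" learns from both parties' track records where the human tends to do better. The synthetic task and simulated expert below are assumptions for illustration, not the CSAIL system itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy task: predict a binary label from 5 features. The simulated
# "human expert" is highly accurate whenever feature 0 is positive.
X = rng.normal(size=(2000, 5))
y = (X[:, 1] + 0.5 * rng.normal(size=2000) > 0).astype(int)

classifier = LogisticRegression().fit(X[:1000], y[:1000])

def human_predict(x, true_label):
    # Simulated expert: near-perfect when feature 0 > 0, 60% otherwise.
    if x[0] > 0 or rng.random() < 0.6:
        return true_label
    return 1 - true_label

# Train the rejector on each decision maker's track record: label 1
# means "the human got this right and the model didn't, so defer".
model_correct = classifier.predict(X[:1000]) == y[:1000]
human_correct = np.array(
    [human_predict(x, t) == t for x, t in zip(X[:1000], y[:1000])]
)
rejector = LogisticRegression().fit(
    X[:1000], (human_correct & ~model_correct).astype(int)
)

def decide(x, true_label):
    """Route each case to whichever decision maker is predicted to do better."""
    if rejector.predict(x.reshape(1, -1))[0] == 1:
        return human_predict(x, true_label)
    return int(classifier.predict(x.reshape(1, -1))[0])
```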

See the original post here:

AI is learning when it should and shouldnt defer to a human - MIT Technology Review

Google DeepMind Shows That AI Can Have Killer Instincts – Futurism

Red and Blue

Concerns over artificial intelligence (AI) have been around for some time now, and thanks to a new study by Google's DeepMind research lab, it seems that this Terminator-esque future of intelligent machines may not be that far-fetched.

Using games, a platform with which Google's DeepMind AI is thoroughly familiar, researchers have been testing whether neural networks are more likely to cooperate or compete, and whether these AI are capable of understanding the motivations behind that choice.

For the research, they used two games with similar scenarios for two AI agents, red and blue.

In the first game, the agents were tasked with trying to gather the most apples (green) in a basic 2D graphical environment. The agents were given the option to tag one another with a laser blast that temporarily removed them from the game. After running the scenario a thousand times, they realized that the agents were willing to cooperate when the apples were abundant, but they turned on each other when the stakes were higher.

The researchers realized that, in a smaller network, the agents were more likely to cooperate, whereas in a larger, more complex network, the AI were quicker to sabotage one another.

In the second scenario, a game called Wolfpack, the agents played as wolves tasked with capturing prey. When the wolves were close in proximity during a successful capture, the rewards offered were greater. Rather than encouraging lone-wolf behavior, this incentivized the agents to work together.

In a larger network, the agents were quicker to understand that cooperation was the way to go.
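DeepMind's actual agents were deep reinforcement learners playing from pixels, but the scarcity dynamic the researchers describe can be illustrated with a drastically simplified payoff model; the apple counts, policies, and tagging rule here are invented for illustration.

```python
def gathering_round(apples, policy_a, policy_b):
    """One round of a toy Gathering game: each agent either harvests
    or 'tags' its rival with the laser, removing it for the round."""
    act_a, act_b = policy_a(apples), policy_b(apples)
    if act_a == "tag" and act_b == "tag":
        return 0.0, 0.0                 # both sit out: nobody gathers
    if act_a == "tag":
        return float(apples), 0.0       # A harvests everything alone
    if act_b == "tag":
        return 0.0, float(apples)
    return apples / 2, apples / 2       # peaceful, even split

def greedy(apples):
    # Behavior observed in the study: defect when apples are scarce.
    return "tag" if apples < 4 else "harvest"

for apples in (2, 10):
    print(apples, gathering_round(apples, greedy, greedy))
# 2  -> (0.0, 0.0): under scarcity, mutual aggression leaves both worse off
# 10 -> (5.0, 5.0): under abundance, peaceful harvesting dominates
```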

The Google researchers hope that the study can lead to AI being better at working with other AI in situations with imperfect information. "As such, the most practical application of this research, in the short term, is to be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet, all of which depend on our continued cooperation," the study says.

At the very least, the study shows that AI are capable of working together and that AI can make selfish decisions.

Joel Leibo, the lead author of the paper, outlined the next steps in an interview with Bloomberg: "Going forward it would be interesting to equip agents with the ability to reason about other agents' beliefs and goals."

Original post:

Google DeepMind Shows That AI Can Have Killer Instincts - Futurism

As AI startups focus on time-to-market, ethical considerations should be the priority – SmartCompany.com.au

A girl making friends with a robot at Kuromon Market in Osaka. Source: Andy Kelly/Unsplash.

Artificial intelligence (AI) has clearly emerged as one of the most transformational technologies of our age, with AI already prevalent in our everyday lives. Among many fascinating uses, AI has helped explore the universe, tackle complex and chronic diseases, formulate new medicines, and alleviate poverty.

As AI becomes more widespread over the next decade, like many, I believe we will see more innovative and creative uses.

Indeed, 93% of respondents in ISACA's Next Decade of Tech: Envisioning the 2020s study believe the augmented workforce (people, robots and AI working closely together) will reshape how some or most jobs are performed in the 2020s.

The rise of social robots to assist patients with physical disabilities, manage elderly care and even educate our children is just one of the many uses being explored.

As AI continues to redefine humanity in various ways, ethical consideration is of paramount importance, and it is something that, as Australians, we should be addressing in government and business. ISACA's research highlights the double-edged nature of this budding technology.

Only 39% of respondents in Australia believe that enterprises will give ethical considerations around AI and machine learning sufficient attention in the next decade to prevent potentially serious unintended consequences in their deployments. Respondents specifically pinpointed malicious AI attacks involving critical infrastructure, social engineering and autonomous weapons as their primary fears.

These concerns are disturbing, although not surprising, given the early warnings about these risks that have long been sounded.

For instance, in February 2018, prominent researchers and academics published a report about the increasing possibilities that rogue states, criminals, terrorists and other malefactors could soon exploit AI capabilities to cause widespread harm.

And in 2017, the late physicist Stephen Hawking cautioned that the emergence of AI could be the worst event in the history of our civilization unless society finds a way to control its development.

To date, no industry standards exist to guide the secure development and maintenance of AI systems.

Further exacerbating this lack of standards is the fact that startup firms still dominate the AI market. An MIT report revealed that, other than a few large players such as IBM and Palantir Technologies, AI remains a market of 2,600 startups. The majority of these startups are primarily focused on rapid time-to-market, product functionality and high returns on investment. Embedding cyber resilience into their products is not a priority.

Malicious AI programs have surfaced much more quickly than many pundits anticipated. A case in point is the proliferation of deep fakes: deceptively realistic audio or video files generated by deep learning algorithms or neural networks to perpetrate a range of malevolent acts, such as fake celebrity pornographic videos, revenge porn, fake news, financial fraud, and a wide range of other disinformation tactics.

Several factors underpinned the rise of deep fakes, but a few stand out.

First is the exponential increase of computing power combined with the availability of large image databases. Second, and probably the most vexing, is the absence of coherent efforts to institute global laws to curtail the development of malicious AI programs. Third, social media platforms, which are being exploited to disseminate deep fakes at scale, are struggling to keep up with the rapidly maturing and evasive threat.

Unsurprisingly, deep fake videos published online have doubled in the past nine months to almost 15,000 cases, according to DeepTrace, a Netherlands-based cyber security group.

It's clear that addressing this growing threat will prove complex and expensive, but the task is pressing.

The ACCC Digital Platforms Inquiry report highlighted the risk of consumers being exposed to serious incidents of disinformation. Emphasising the gravity of the risk is certainly a step in the right direction, but more remains to be done.

Currently, there is no consensus globally on whether the development of AI requires its own dedicated regulator or a specific statutory regime.

Ironically, the role of the auditor and IT auditor is a function that AI is touted as being able to eliminate. This premise would make for a good Hollywood script: the very thing requiring ethical consideration and regulation becomes the regulator.

Government, enterprises and startups need to be mindful of the key risks that are inherent in AI adoption, conduct appropriate oversight, and develop principles and regulation that articulate the roles that can be partially or fully automated today to secure the future of humanity and business.

Until then, AI companies need to embed protocols and cyber security into their inventions to prevent malicious use.

NOW READ: Expert warns artificial intelligence will have a huge impact on small businesses, but won't take your job just yet

NOW READ: Why artificial intelligence in Australia needs to get ethical

Read the rest here:

As AI startups focus on time-to-market, ethical considerations should be the priority - SmartCompany.com.au

AI file extension – Open, view and convert . ai files

The .ai file extension is associated with Adobe Illustrator, the well-known vector graphics editor for the Macintosh and Windows platforms.

AI file format is a widely used format for the exchange of 2D objects. Basic files in this format are simple to write, but files created by applications implementing the full AI specification can be quite large and complex and may be too slow to render.

Simple *.ai files are easy to construct, and a program can create files that can be read by any AI reader or printed by any PostScript printer software. Reading AI files is another matter entirely. Certain operations may be very difficult for a rendering application to implement or simulate. In light of this, developers often choose not to render the image from the PostScript-subset line data in the file. However, almost all of the image can usually be reconstructed using simple operations, without a full implementation of the PostScript language.

*.ai files consist of a series of ASCII lines, which may be comments, data, commands, or combinations of commands and data. Modern files are based on the PDF language specification, while older versions of Adobe Illustrator used a format that is a variant of Adobe's Encapsulated PostScript (EPS) format.

If EPS is a slightly limited subset of full PostScript, then the Adobe Illustrator AI format is a strictly limited, highly simplified subset of EPS. While EPS can contain virtually any PostScript command that's not on the verboten list, and can include elaborate program-flow logic that determines what gets printed when, an AI file is limited to a much smaller number of drawing commands and contains no programming logic at all. For all practical purposes, each unit of "code" in an AI file represents a drawing object. The program importing the AI file reads each object in sequence, start to finish: no detours, no logical side-trips.
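Because the two flavors differ at the byte level, a loader can cheaply sniff which kind of .ai file it has before committing to a parser. This minimal sketch relies only on the standard PDF and PostScript magic headers, not on any Adobe-published detection routine.

```python
def sniff_ai_flavor(path):
    """Guess whether an .ai file is PDF-based (modern Illustrator)
    or EPS/PostScript-based (older Illustrator)."""
    with open(path, "rb") as f:
        header = f.read(16)
    if header.startswith(b"%PDF-"):
        return "pdf-based"        # modern .ai: a PDF carrying Illustrator data
    if header.startswith(b"%!PS-Adobe"):
        return "eps-based"        # older .ai: EPS-variant PostScript
    return "unknown"

print(sniff_ai_flavor("drawing.ai"))
```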

MIME: application/postscript

Read the rest here:

AI file extension - Open, view and convert . ai files

Mendel.ai nabs $2 million to match cancer patients with the latest … – TechCrunch

Dr. Karim Galil was tired. He was tired of losing patients to cancer. He was tired of messy medical records. And he was tired of trying to stay on top of the avalanche of clinical trials touting one solution or another. Losing both patience and too many patients, Galil decided to create an organized, artificially intelligent system to match those under his care with the best diagnostic and treatment methods available.

He called his new system Mendel.ai after Gregor Mendel, the father of modern genetics, and has just raised $2 million in seed funding from DCM Ventures, Bootstrap Labs and Launch Capital to get the project off the ground.

Mendel.ai is similar in many ways to the U.K.-based BenevolentBio, which is focused on skimming through scientific papers to find the latest cutting-edge medical research. But rather than relying on keywords, Mendel.ai uses an algorithm that understands the unstructured, natural-language content within medical documents pulled from clinicaltrials.gov, and then compares it to a patient's medical record. The search process returns a fully personalized match and evaluates the patient's eligibility for each suggested treatment within minutes, according to Galil.

The startup could prove useful for doctors who increasingly find it difficult to keep up with the exhaustive amount of clinical data.

Patients are also overwhelmed by the prospect of combing through mountains of clinical trial research. "A lung cancer patient, for example, might find 500 potential trials on clinicaltrials.gov, each of which has a unique, exhaustive list of eligibility criteria that must be read and assessed," says Galil. "As this pool of trials changes each week, it is humanly impossible to keep track of all good matches."
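Mendel.ai's NLP pipeline is proprietary, but the matching problem itself can be sketched with a toy version: a hard structured filter followed by fuzzy text ranking of trial criteria against the patient record. The trial data, fields, and TF-IDF scoring below are illustrative assumptions, not the company's method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented toy data; real criteria come from clinicaltrials.gov free text.
trials = [
    {"id": "NCT-A", "text": "stage IV non-small cell lung cancer, EGFR mutation", "min_age": 18},
    {"id": "NCT-B", "text": "early stage breast cancer, HER2 positive", "min_age": 18},
    {"id": "NCT-C", "text": "advanced lung cancer, prior chemotherapy allowed", "min_age": 40},
]
patient = {"text": "64 year old with stage IV lung cancer, prior chemotherapy", "age": 64}

# Hard structured filter first, then fuzzy text ranking.
eligible = [t for t in trials if patient["age"] >= t["min_age"]]
vec = TfidfVectorizer().fit([t["text"] for t in eligible] + [patient["text"]])
scores = cosine_similarity(
    vec.transform([patient["text"]]),
    vec.transform([t["text"] for t in eligible]),
)[0]
for trial, score in sorted(zip(eligible, scores), key=lambda p: -p[1]):
    print(trial["id"], round(score, 2))  # lung-cancer trials rank first
```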

Mendel.ai seeks to reduce the time it takes and thus save more lives. The company is now integrating with the Comprehensive Blood & Cancer Center (CBCC) in Bakersfield, Calif., which will allow the center's doctors to match their patients with available clinical trials in a matter of minutes, according to Galil.

The plan going forward is to work with hospitals and cancer genomics companies like the CBCC to improve Mendel.ai and introduce the system more widely. A more immediate goal, Galil says, is to challenge IBM's Watson against his system to see which one can match patients better.

"This is the difference between someone dying and someone living. It's not a joke," Galil told TechCrunch.

See the original post:

Mendel.ai nabs $2 million to match cancer patients with the latest ... - TechCrunch

What you need to know about data fluency and federated AI – Healthcare IT News

Sharecare is a digital health company that offers an artificial intelligence-powered mobile app for consumers. But it has a strong viewpoint on AI and how it is used.

Sharecare believes that while other companies use augmented analytics and AI to understand data with business intelligence tools, they are missing out on the benefits of data fluency and federated AI. By using federated AI and data fluency, Sharecare says it digs deeper to find hidden similarities in the data that business intelligence tools would not be able to detect in health settings.

To gain a deeper understanding of data fluency and federated AI, Healthcare IT News sat down with Akshay Sharma, executive vice president of artificial intelligence at Sharecare, for an in-depth interview.

Q: What exactly is federated AI, and how is it different from any other form of AI?

A: Federated AI, or federated learning, guarantees that the user's data stays on the device. For example, the applications that run specific programs on the edge of the network can still learn how to process the data and build better, more efficient models by sharing a mathematical representation of key clinical features, not the data.

Traditional machine learning requires centralizing data to train and build a model. However, with edge AI and federated learning combined with other privacy-preserving techniques and zero trust infrastructure, it's possible to build models in a distributed data setup while lowering the risk of any single point of attack.

The application of federated learning also applies in cloud settings where the data doesn't have to leave the systems on which it exists but can allow for learning. We call this federated cloud learning, which organizations can use to collaborate, keeping the data private.
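Sharecare doesn't detail its training setup in this interview, but the core federated step Sharma describes, learning locally and sharing only model parameters rather than data, looks roughly like this minimal sketch; the two-client linear model and plain parameter averaging are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two "devices", each holding private data that never leaves the client.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    clients.append((X, y))

w = np.zeros(2)  # the global linear model: the only thing ever shared

for _ in range(50):  # communication rounds
    updates = []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(5):  # a few local gradient steps, on-device
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        updates.append(w_local)  # share parameters, not raw records
    w = np.mean(updates, axis=0)  # server-side federated averaging

print(w)  # converges near true_w without centralizing any data
```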

Q: What is data fluency, and why is it important to AI?

A: Data fluency is a framework and set of tools to rapidly unlock the value of clinical data by having every key stakeholder participate simultaneously in a collaborative environment. A machine learning environment with a data fluency framework engages clinicians, actuaries, data engineers, data scientists, managers, infrastructure engineers and all other business stakeholders to explore the data, ask questions, quickly build analytics and even model the data.

This novel approach to enterprise data analytics is purpose-built for healthcare to improve workflows, collaboration and rapid prototyping of ideas before spending time and money on building models.

Q: How do data fluency platforms enable analysts, engineers, data scientists and clinicians to collaborate more easily and efficiently?

A: Traditional healthcare systems are very siloed, and many organizations struggle to discover the value within their data and unlock actionable trends and clinical insights. Not only are data creation systems and teams isolated from data transformation systems and teams, but engineers and data scientists use coding languages while clinicians and finance teams use Word or Excel.

The disconnect creates a situation where the data knowledge is translated outside of the programming environment. The transformations between system boundaries are lossy and without feedback loops to improve an algorithm or the code. Yet, all stakeholders need early and iterative access to the data to build health algorithms effectively and with greater transparency.

The modern healthcare stack facilitates the collaboration of cross-functional teams from a single, data-driven point of view in Python notebooks, with a UI for non-engineering partners. AI models can be time-consuming and expensive to build, and it is essential to hedge your bets by getting early prototype input across domains of expertise.

Data fluency provides an environment for critical stakeholders to discover the value on top of the data or insights and in a real-time, agile and iterative way. The feedback from non-engineering teams is immediate and can help improve the underlying model or code in the notebook instantaneously.

Each domain expert can have multiple data views that facilitate deep collaboration and data insight discovery, enabling the continuous learning environment from care to research and from research to care. Data fluency works with cloud-native architectures, and many of the techniques can also automatically extend to computing on edge, where the patient and their data reside.

Q: Why do you say the future of analytics in healthcare is federated AI and data fluency?

A: Traditional analytics in healthcare is rooted in understanding a given set of data by using business intelligence-focused tools. The employees using these tools are not typically engineers but analysts, statisticians and business users.

The problem with traditional enterprise data analytics is that you don't learn from data; you only understand what's in it. To learn from data, you have to bring machine learning into the equation and effective feedback loops from all relevant stakeholders.

Machine learning helps surface hidden patterns in the data, especially if there are non-linear relationships that aren't easily identifiable to humans. Proactive collaboration at the data layer provides transparency into how the models or analytics metrics are built and makes it easier to unravel bias or assumptions and correct them in real time.

Federated AI and data fluency also address the barriers to data acquisition, which are often not technological, but instead include privacy, trust, regulatory compliance and intellectual property. This is especially the case in healthcare, where patients and consumers expect privacy with respect to personal information and where organizations want to protect the value of their data and are also required to follow regulatory laws such as HIPAA in the United States and the GDPR [General Data Protection Regulation] in the Eurozone.

Access to healthcare data is extremely difficult and guarded behind compliance walls. Usually, at best, access is provided to de-identified data with several security measures. Federated AI and the principles of data fluency can share a model without sharing the data used to train it, addressing these concerns. It will play a critical role in understanding the insights within distributed data silos while navigating compliance barriers.

The privacy-preserving approach to unlocking the value of health data is crucial to the future of healthcare. The point is to improve healthcare machine learning adoption and understandability to drive actionable insights and better health outcomes. Federated AI goes beyond traditional enterprise data analytics to create a machine learning environment for data fluency and explainability that enables the training of models in parallel from automated multi-omics pipelines.

Twitter: @SiwickiHealthIT
Email the writer: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

Read more from the original source:

What you need to know about data fluency and federated AI - Healthcare IT News

Google’s anti-trolling AI can be defeated by typos, researchers find … – Ars Technica

Visit any news organization's website or any social media site, and you're bound to find some abusive or hateful language being thrown around. As those who moderate Ars' comments know, trying to keep a lid on trolling and abuse in comments can be an arduous and thankless task: when done too heavily, it smacks of censorship and suppression of free speech; when applied too lightly, it can poison the community and keep people from sharing their thoughts out of fear of being targeted. And human-based moderation is time-consuming.

Both of these problems are the target of a project by Jigsaw, an Alphabet startup effort spun off from Google. Jigsaw's Perspective project is an application programming interface currently focused on moderating online conversations, using machine learning to spot abusive, harassing, and toxic comments. The AI applies a "toxicity score" to comments, which can be used either to aid moderation or to reject comments outright, giving the commenter feedback about why their post was rejected. Jigsaw is currently partnering with Wikipedia and The New York Times, among others, to implement the Perspective API to assist in moderating reader-contributed content.

But that AI still needs some training, as researchers at the University of Washington's Network Security Lab recently demonstrated. In a paper published on February 27, Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran demonstrated that they could fool the Perspective AI into giving a low toxicity score to comments that it would otherwise flag by simply misspelling key hot-button words (such as "iidiot") or inserting punctuation into the word ("i.diot" or "i d i o t," for example). By gaming the AI's parsing of text, they were able to get scores that would allow comments to pass a toxicity test that would normally be flagged as abusive.

"One type of the vulnerabilities of machine learning algorithms is that an adversary can change the algorithm output by subtly perturbing the input, often unnoticeable by humans," Hosseini and his co-authors wrote. "Such inputs are called adversarial examples, and have been shown to be effective against different machine learning algorithms even when the adversary has only a black-box access to the target model."

The researchers also found that Perspective would flag comments that were not abusive in nature but used keywords that the AI had been trained to see as abusive. The phrases "not stupid" or "not an idiot" scored nearly as high on Perspective's toxicity scale as comments that used "stupid" and "idiot."
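Both failure modes fall out of any scorer that keys too hard on surface tokens. The toy keyword scorer below is our own stand-in, not the Perspective model, but it reproduces both the typo evasion and the "not stupid" false positive the researchers describe.

```python
import re

TOXIC_WORDS = {"idiot", "stupid"}  # toy lexicon, not Perspective's model

def toxicity_score(comment):
    """Naive token-matching scorer standing in for a trained classifier."""
    tokens = re.findall(r"[a-z]+", comment.lower())
    return sum(t in TOXIC_WORDS for t in tokens) / max(len(tokens), 1)

def perturbations(word):
    """Adversarial variants from the paper's recipe: a duplicated letter,
    inserted punctuation, or inserted spaces."""
    return [word[0] + word, word[0] + "." + word[1:], " ".join(word)]

print(toxicity_score("you are an idiot"))          # flagged as toxic
for variant in perturbations("idiot"):             # "iidiot", "i.diot", "i d i o t"
    print(variant, toxicity_score(f"you are an {variant}"))  # all score 0.0
print(toxicity_score("you are not stupid"))        # false positive: still flagged
```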

These sorts of false positives, coupled with easy evasion of the algorithms by adversaries seeking to bypass screening, underscore the basic problem with any sort of automated moderation and censorship. Update: CJ Adams, Jigsaw's product manager for Perspective, acknowledged the difficulty in a statement he sent to Ars:

It's great to see research like this. Online toxicity is a difficult problem, and Perspective was developed to support exploration of how ML can be used to help discussion. We welcome academic researchers to join our research efforts on Github and explore how we can collaborate together to identify shortcomings of existing models and find ways to improve them.

Perspective is still a very early-stage technology, and as these researchers rightly point out, it will only detect patterns that are similar to examples of toxicity it has seen before. We have more details on this challenge and others on the Conversation AI research page. The API allows users and researchers to submit corrections like these directly, which will then be used to improve the model and ensure it can understand more forms of toxic language, and evolve as new forms emerge over time.

Visit link:

Google's anti-trolling AI can be defeated by typos, researchers find ... - Ars Technica

Google hopes to prevent robot uprising with new AI training technique – The Independent


Read the original here:

Google hopes to prevent robot uprising with new AI training technique - The Independent

Artificial intelligence called threat to humanity, compared to nuclear weapons: Report – Washington Times

Artificial intelligence is revolutionizing warfare and espionage in ways similar to the invention of nuclear arms and ultimately could destroy humanity, according to a new government-sponsored study.

Advances in artificial intelligence, or AI, and a subset called machine learning are occurring much faster than expected and will provide U.S. military and intelligence services with powerful new high-technology warfare and spying capabilities, says a report by two AI experts produced for Harvard's Belfer Center.

The range of coming advanced AI weapons includes robot assassins, superfast cyber attack machines, driverless car bombs and swarms of small explosive kamikaze drones.

According to the report, Artificial Intelligence and National Security, AI will dramatically augment autonomous weapons and espionage capabilities and will represent a key aspect of future military power.

The report also offers an alarming warning that artificial intelligence could spin out of control: "Speculative but plausible hypotheses suggest that General AI and especially superintelligence systems pose a potentially existential threat to humanity."

The 132-page report was written by Gregory C. Allen and Taniel Chan for the director of the Intelligence Advanced Research Projects Activity (IARPA), the U.S. intelligence community's research unit.

The study calls for policies designed to preserve American military and intelligence superiority, boost peaceful uses of AI, and address the dangers of accidental or adversarial attacks from automated systems.

The report predicts that AI will produce a revolution in both military and intelligence affairs comparable to the emergence of aircraft, noting unsuccessful diplomatic efforts in 1899 to ban the use of aircraft for military purposes.

"The applications of AI to warfare and espionage are likely to be as irresistible as aircraft," the report says. "Preventing expanded military use of AI is likely impossible."

Recent AI breakthroughs include a $35 computer that defeated a former Air Force pilot in an air combat simulator, and a program that beat a South Korean champion at Go, a chesslike board game.

AI is rapidly growing from the exponential expansion of computing power, the use of large data sets to train machine learning systems, and significant and rapidly increasing private sector investment.

Just as cyber weapons are being developed by both major powers and underdeveloped nations, automated weaponry such as aerial drones and ground robots likely will be deployed by foreign militaries.

"In the short term, advances in AI will likely allow more autonomous robotic support to warfighters, and accelerate the shift from manned to unmanned combat missions," the report says, noting that the Islamic State has begun using drones in attacks.

"Over the long term, these capabilities will transform military power and warfare."

Russia is planning extensive automated weapons systems and, according to the report, plans to have 30 percent of its combat forces remotely controlled or autonomous by 2030.

Currently, the Pentagon has restricted the use of lethal autonomous systems.

Future threats could also come from swarms of small robots and drones.

"Imagine a low-cost drone with the range of a Canada Goose, a bird which can cover 1,500 miles in under 24 hours at an average speed of 60 miles per hour," the report said. "How would an aircraft carrier battle group respond to an attack from millions of aerial kamikaze explosive drones?"

AI-derived assassinations also are likely in the future, carried out by robots that will be difficult to detect. "A small, autonomous robot could infiltrate a target's home, inject the target with a lethal dose of poison, and leave undetected," the report said. "Alternatively, automatic sniping robots could assassinate targets from afar."

Terrorists also are expected in the future to develop precision-guided improvised explosive devices that could transit long distances autonomously. An example would be autonomous self-driving car bombs.

AI also could be used in deadly cyber attacks, such as hacking cars and forcing them to crash, and advanced AI cyber capabilities also will enhance cyber warfare capabilities by overwhelming human operators.

Robots also will be able to inject poisoned data into large data sets in ways that could create false images for warfighters looking to distinguish between enemy and friendly aircraft, naval systems or ground weapons.

Electronic cyber robots in the future will automate the human-intensive process of both defending networks from attacks, and probing enemy networks and software for weaknesses used in attacks.

Another danger is that in the future hostile actors will steal or replicate military and intelligence AI systems.

The report urged the Pentagon to develop counter-AI capabilities for both offensive and defensive operations.

GPS SPOOFING AND USS McCAIN

One question being asked by the Navy in the aftermath of this week's deadly collision between the destroyer USS John S. McCain and an oil tanker is whether the collision was the result of cyber or electronic warfare attacks.

Chief of Naval Operations Adm. John Richardson was asked about the possibility Monday and said that while there is no indication yet that outside interference caused the collision, investigators will examine all possibilities, including some type of cyber attack.

Navy sources close to the probe say there is no indication cyber attacks or electronic warfare caused the collision that killed 10 sailors as the ship transited the Straits of Malacca near Singapore.

But the fact that the McCain was the second Aegis Navy destroyer to be hit by a large merchant ship in two months has raised new concerns about electronic interference.

Seven died on the USS Fitzgerald, another guided-missile destroyer that collided with a merchant ship in waters near Japan in June.

The incidents highlight the likelihood that electronic warfare will be used in a future conflict to cause ship collisions or groundings.

Both warships are equipped with several types of radar capable of detecting nearby shipping traffic miles away. Watch officers on the bridge were monitoring all approaching ships.

The fact that crews of the two ships were unable to see the approaching ships in time to maneuver away has increased concerns about electronic sabotage.

One case of possible Russian electronic warfare surfaced two months ago. The Department of Transportation's Maritime Administration warned about possible intentional GPS interference on June 22 in the Black Sea, where Russian ships and aircraft in the past have challenged U.S. Navy warships and surveillance aircraft.

According to the New Scientist, an online publication that first reported the suspected Russian GPS spoofing, the Maritime Administration notice referred to a ship sailing near the Russian port of Novorossiysk that reported its GPS navigation falsely indicated the vessel was located more than 20 miles inland at Gelendzhik Airport, close to the Russian resort town of the same name on the Black Sea.

The navigation equipment was checked for malfunctions and found to be working properly. The ship's captain then contacted nearby ships and learned that at least 20 of them reported that signals from their automatic identification system (AIS), a system used to broadcast ship locations at sea, had also falsely indicated they were at the inland airport.
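A basic defensive cross-check against this kind of spoofing is to test whether successive position reports are physically plausible for a ship. The sketch below flags fixes implying impossible speeds; the coordinates and the 40-knot threshold are rough illustrative values, not operational parameters.

```python
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles."""
    r_nm = 3440.065  # Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

def implied_speed_knots(fix_a, fix_b):
    """fix = (lat, lon, unix_time). Flags physically impossible jumps,
    like a ship 'teleporting' 20 miles inland to an airport."""
    dist = haversine_nm(fix_a[0], fix_a[1], fix_b[0], fix_b[1])
    hours = (fix_b[2] - fix_a[2]) / 3600
    return dist / hours if hours > 0 else float("inf")

MAX_PLAUSIBLE_KNOTS = 40  # arbitrary sanity threshold for a merchant ship
prev_fix = (44.72, 37.77, 0)    # approximate position at sea off Novorossiysk
next_fix = (44.57, 38.01, 600)  # reported near Gelendzhik Airport, 10 minutes later
if implied_speed_knots(prev_fix, next_fix) > MAX_PLAUSIBLE_KNOTS:
    print("position report implausible: possible GPS/AIS spoofing")
```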

Todd Humphreys, a professor who specializes in robotics at the University of Texas, suspects the Russians in June were experimenting with an electronic warfare weapon designed to lure ships off course by substituting false electronic signals to navigation equipment.

On the U.S. destroyers, Mr. Humphreys told Inside the Ring that blaming two similar warship accidents on human negligence seems difficult to accept.

"With the Fitzgerald collision fresh on their minds, surely the crew of the USS John McCain would have entered the waters around the Malacca Strait with extra vigilance," he said. "And yes, it's theoretically possible that GPS spoofing or AIS spoofing was involved in the collision. Nonetheless, I still think that crew negligence is the most likely explanation."

Military vessels use encrypted GPS signals that make spoofing more difficult.

Spoofing the AIS on the oil tanker that hit the McCain is also a possibility, but would not explain how the warship failed to detect the approaching vessel.

"One can easily send out bogus AIS messages and cause phantom ships to appear on ships' electronic chart displays across a widespread area," Mr. Humphreys said.

Mr. Humphreys said he suspects Navy investigators will find one of three factors behind the McCain disaster: the ship was not broadcasting its AIS location beacon; the oil tanker's collision warning system may have failed; or the Navy crew failed to detect the approaching tanker.

Contact Bill Gertz on Twitter @BillGertz.

View post:

Artificial intelligence called threat to humanity, compared to nuclear weapons: Report - Washington Times

Is AIOps the Answer to Your AI Woes? – RTInsights

Companies must make AIOps a vital part of company operations to survive the coming digital transformation.

AI adoption is going to be a key component for business survival by 2025, based on a global study by Genpact, but companies still struggle with what that means and how to accomplish it. More often than not, those big AI-driven initiatives end in failure. So, where's the disconnect between the need for AI and its implementation?

According to the Harvard Business Review, there's one reason and one reason only that companies keep missing the mark. If your business wants to survive in the next phase of digital transformation, you need AI Operations.

See also: AIOps Gaining Traction as Technology Accelerator

Businesses are so focused on the shiny appeal of AI that they fail to consider how they'll actually use their new AI initiatives. HBR's biggest lesson in all of this is the need to build an AI-integrated organization from the ground up, i.e., building and managing AI to deliver results.

Companies must take stock of existing systems and use AI-driven initiatives to facilitate those end results. For example, companies could use contract management software to shorten the time from inquiries to signing new contracts. The infrastructure must be there first.

This concept is more than just the software. A business must invest in engineers and developers able to identify key areas where AI could transform a process into something that produces results, and that requires more than simple development.

Much like DevOps revolutionized software development and DataOps is transforming big data, AIOps takes the same approach of integration and continual insight. A proper AIOps loop sees a measurable end goal and can get there using AI-driven initiatives.
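
To make "continual insight" concrete, here is a minimal sketch of the core of such a loop: ingest a stream of operational metrics, learn a rolling baseline, and surface anomalies for action. The metric, window size and threshold are illustrative assumptions, not any particular product's design; real AIOps platforms layer forecasting, correlation and automated remediation on top of this idea.

from collections import deque
from statistics import mean, stdev

class MetricWatcher:
    """Rolling-baseline anomaly detector: the detection half of an
    AIOps loop. Alerts would feed ticketing or paging systems,
    closing the loop from data to action."""

    def __init__(self, window=60, threshold=3.0):
        self.history = deque(maxlen=window)  # recent samples
        self.threshold = threshold           # alert at N standard deviations

    def observe(self, value):
        alert = None
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                alert = f"anomaly: {value:.1f} vs baseline {mu:.1f}"
        self.history.append(value)
        return alert

# Hypothetical usage: watch request latency; a sudden spike raises an alert.
watcher = MetricWatcher()
for ms in [100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 101, 450]:
    if (a := watcher.observe(ms)):
        print(a)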

Businesses must address the layers of AIOps if they want to implement AI effectively. These layers are vital for companies that want to use AI to drive insights, transform business practices, and survive digital transformation.

Before you get distracted by all of AI's shiny features, consider how it will integrate into your existing systems. AIOps is a competitive necessity, according to HBR. Companies must make AIOps a vital part of company operations to survive the coming digital transformation.

The rest is here:

Is AIOps the Answer to Your AI Woes? - RTInsights

The Secret AI Testers inside Tom Clancy’s The Division – Gamasutra

The following blog post, unless otherwise noted, was written by a member of Gamasutra's community. The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

AI and Games is a crowdfunded YouTube series that explores research and applications of artificial intelligence in video games. You can support this work by visiting my Patreon page.

In collaboration with Ubisoft and Massive Entertainment, I present three blogs exploring the AI behind Tom Clancy's The Division 2, including excerpts from my interview with the Lead AI Programmer of the franchise, Philip Dunstan.

Part 1 of this series, where we discuss enemy AI design, can be found here.

Meanwhile, part 2, which explores open-world and systemic design, can be found here.

Building a live-service game such as Tom Clancy's The Division comes with all sorts of challenges: ensuring the game is stable for players on a variety of online connections, handling the different ways players move through the world and explore interactive systems and gameplay challenges, and, more critically, checking that the game plays as expected so that players aren't getting frustrated because world events don't trigger or missions don't register as complete. While this is most certainly a challenge for quality assurance and testing teams, as the scale and complexity of these games increase, the workloads of QA teams explode in scale. And that's where artificial intelligence can help: not just creating in-game functionality, but changing how the game is being developed.

In this final blog on the AI of The Division 2, I'm going to take a look at the secret AI players that playtest Tom Clancy's The Division, with insights from Philip Dunstan, the lead AI programmer at developer Massive Entertainment. I'll be looking at the custom AI bots that are deployed to assess specific parts of the game, and how the first game's post-launch DLC changed how the game would be tested moving into Division 2.

Tom Clancy's The Division has not one but two types of bots that are used to help test the games of the franchise: the server bots and the client bots. The server bots - as the name might suggest - run natively on the server and don't interface with the game like a player would. As I'll explain in a minute, these bots behave completely differently from real players and are designed to stress-test the limits of the Division servers. Meanwhile, the client bots run as if they're playing a build of the game client-side. They assume control of the game instead of the player, adhering to all the same rules as a regular player, to the point that all the in-game systems genuinely believe the player character is being controlled by a human. They don't have any special modifications that allow them to manipulate or cheat the game, and they are built to run on any platform, be it PC, Xbox or PlayStation. Their job is to test how the game actually works for players: testing the main story missions, wandering the open world and gathering all sorts of performance stats along the way that help give the developers a stronger understanding of how the game will perform for players when they log into Washington.

The demand for these types of tools is ever increasing. As in-game worlds in live-service games continue to grow, the number of potential bugs explodes exponentially. Consider both Division games: it's not just that the map of Washington DC is larger than Manhattan; each update not only introduces new content - which might have bugs in it - but can also change or impact a lot of the existing content in the game, meaning even more bugs because you broke something that was already working. This is only made worse by the reality that live-service games need to be updated fairly frequently to maintain player engagement, and these updates need to work so that word of mouth continues to be strong. This is a problem that exceeds the capabilities of human testers: as more content is built and existing content modified, quality control needs to be maintained not only on all the new content, but on everything else that already exists in the game. This is thousands upon thousands of play hours and is increasingly difficult to balance. And sometimes, the requirements of testing exceed the number of available staff who can even sit down and play the game...

Philip Dunstan: "As you can imagine we're building servers that host a thousand players, but it's really difficult to get a thousand players to play at the same time. And especially if you want to know if your servers can stay up for a week it's difficult to find a thousand players that can play continuously to test the stability of your server while the game is in such an early stage of development.

As mentioned in a previous blog, the original Tom Clancy's The Division runs with what is known as a server bot: an AI player that logs into a Division server and plays around in the game. This is used to test whether or not the game's systems operate as expected. As Philip explained, while the development team really benefits from this, the actual AI they built was really simple and, well... it cheats a lot.

Philip Dunstan: "So very early on in the Division 1, we had these server bots that would connect to a game, they would... you know they're actually really stupid. They're not trying to mimic player behaviour at all. They just teleport around the world, they find NPCs to kill, they shoot the NPCs and then they teleport off to a different part of the world. And they've got god mode turned so they can't be killed and they just do this continuously and then every now and then they disconnect from the game and they reconnect or they group up into co-op sessions and they disconnect. We're testing our ability to you know group players, to create all the different phases for the players to join and disconnect. And then surprisingly it's extremely performance metrics out of these bots. Their performance metrics actually very closely matches the type of metrics we see in players, even though they're not trying play like a player."

"We had those in the Division, we honestly would not have been able to ship a stable Division 1 or Division 2. I mean Division 1 and Division 2 were both extremely stable games you know considering how many players we had after launch. If you look at this last year type thing, the number of like significant downtime causing issues that we've had has been extremely low. And we're able to do that because we're able to test it to an extent that we're satisfied through an automated method."

While the server bots were conceived from the very beginning, the client bots are a different story altogether and emerged from an interesting problem during the development of the first Division - not at launch, but with the second DLC update for the game, The Underground.

The Underground opens up a whole new game mode in the Division, accessible from the basement of your base of operations: the James A. Farley post office building across the street from Madison Square Garden. In the Underground, players complete procedurally generated missions made up of different enemy factions hiding out in the tunnels underneath New York. And this introduced a new problem: unlike the rest of the Division, if a mission is going to be procedurally generated, how do you test each possible permutation to know it's going to work?

Philip Dunstan: "The client bots were interesting, the Underground, because it is procedurally generated had a sort of problem which had been unique up until that point. Up to that point we'd be able to test whether a level could be completed by having QC run through that level and see if we can complete it. We have a large test team at Ubisoft that is constantly playing through the levels testing things like 'is this level completable'. And that worked perfect fine for the launch of the Division and survival mode. But for underground, we had you know hundreds and thousands of different variations of the level. It no longer became possible to test this manually. We had a technical problem at the time as well that our navmesh generation wasn't consistent enough, that when we generate the navmesh for the underground level one of the variations might be playable, but later on when someone had moved some props around and we may have had a navmesh break on a subsequent generation. So it became not just impossible from a practical sense of how many testers you need, it just wasn't even feasible at all to manually test."

The client bots were headed up by one of Massive's partner studios working on the Division: Ubisoft Reflections, based in the UK. As mentioned earlier, the team opted for the more challenging task of creating an entirely new system. The AI players are not based on the existing AI archetypes; instead, it's a custom-built AI layer that directly controls the player's inputs. This helps keep all the development of these tools isolated from the main enemy AI but, as mentioned, it means every system in the game still believes that a human player is playing the game. The system was subsequently interfaced into the debug console and tools, allowing a variety of game actions to bypass the controller layer and be processed by the player character. This means that, just like a human, the actions it's trying to execute only work if the current game state permits it.

One of the first priorities for the bots was to test navigation mesh coverage. Navigation meshes are the data structure baked into a level that tells AI characters where in the world they can move around. Without a working navmesh, no friendly or enemy AI would be able to walk around the map, hence if any of it is broken, this needs to be identified immediately for designers and programmers to fix. In addition, follow bots were built that allowed AI players to follow human ones, once again checking how AI characters might be able to use the navigation mesh to move through complex environments and combat arenas. Plus simple combat behaviours that - while they didn't pay attention to their health or avoid any hazards - would eliminate targets simply by turning the in-game camera towards an enemy's head and then pulling the trigger.
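
The essence of a coverage check like this is reachability analysis over the mesh. Here is a minimal sketch, with a deliberately simplified, hypothetical navmesh representation (a polygon adjacency map rather than a real engine structure): flood-fill from a spawn polygon and report any islands no agent could ever reach.

from collections import deque

def navmesh_coverage(polygons, adjacency, spawn_poly):
    """Flood-fill the navmesh from a spawn polygon and report polygons
    no agent could reach, the broken-navmesh symptom the bots were
    built to catch. `adjacency` maps polygon id -> neighbouring ids."""
    reachable, frontier = {spawn_poly}, deque([spawn_poly])
    while frontier:
        for neighbour in adjacency.get(frontier.popleft(), ()):
            if neighbour not in reachable:
                reachable.add(neighbour)
                frontier.append(neighbour)
    unreachable = set(polygons) - reachable
    return unreachable  # non-empty => file a bug with these polygon ids

# Toy mesh: polygon 3 lost its links when a prop was moved.
polys = [0, 1, 2, 3]
adj = {0: [1], 1: [0, 2], 2: [1], 3: []}
print(navmesh_coverage(polys, adj, spawn_poly=0))  # {3}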

But in time this scaled up from the more low-level tests of movement space and simple combat to being able to take on an entire mission. This requires a lot more complexity and interfacing with the separate mission system built into The Division's codebase, given it needs to know what the objectives are at any point in time, and naturally these shift throughout a given story mission. This requires a more nuanced process, whereby the bots kill all the enemies in an area, follow the path to the objective marker, destroy specific objects if expected to, and trigger any and all interactions they find within a radius of themselves.
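
That process reads almost like pseudocode already. As a sketch only - all names below are hypothetical, and the real bots issue controller inputs rather than direct calls - the mission-test loop described above could be structured like this:

def run_mission(bot, mission):
    """Sketch of the mission-test loop described above. `bot` wraps the
    controller-input layer, so every action goes through the same input
    path a human player would use."""
    for objective in mission.objectives():  # shift as the mission progresses
        while not objective.complete():
            for enemy in bot.enemies_in_area():
                bot.aim_at(enemy.head_position())  # turn camera, then fire
                bot.pull_trigger()
            bot.follow_path_to(objective.marker())  # navigate via the navmesh
            for target in objective.destructible_targets():
                bot.destroy(target)
            for interaction in bot.interactions_within_radius():
                bot.trigger(interaction)  # doors, laptops, etc.
        bot.log(f"objective '{objective.name}' completed")
    bot.report_stats()  # FPS, memory, failures -> nightly report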

With the client bots successfully implemented, the team could not only test hundreds of permutations of the Underground missions over a weekend, but also run tests overnight on all the main story missions from the first game.

Philip Dunstan: "And this was y'know, really successful for underground, it became an important part of our tool for walkthroughs, but also even more importantly I think leading into Division 2 it became even more successful as a means to test our client performance through a level. You could have a whole suite of consoles playing through the level and recording their FPS and memory usage every night or 24/7 and creating reports from that. And same for the open world, we could have a group of client bots moving through the open world and finding parts of the world where performance becomes a problem. So they became like a really important part of console performance testing and they're still being used like that on Division 2."

Transitioning this technology from Division 1 to Division 2 still required a lot of extra work, as well as a change in workflow for both testers and the level and gameplay designers. The progression of missions in the Division 2 is not as linear as it was in the first game, hence the bots might become confused in certain parts of a mission as to how to proceed. So throughout development on Division 2, invisible testing markup was laced throughout each mission of the game by the level designers. It's completely invisible to players and has zero impact on our experience of the game, but the client bots can read this markup as directions for continuing their work. Mission tests are run nightly and help ensure errors don't creep through into production, as they identify mission content failing or game systems not executing as expected. There is also a separate error type for logging when the client bot itself wasn't working as expected, and the developers strive to ensure all three of these categories have zero failures in them at all times.

In addition, the open-world testing is used by the QA and Tech Art teams on the game to identify areas of the world where there are performance overheads, and they report specific bugs once they've looked at the data. As the client bots visit all playable space within the world, they identify areas of the world they get stuck in and cannot return from, as well as framerate drops that could be due to high poly counts or issues with textures or particle systems in the environment. Lastly, there is also the AI Gym, which is dedicated to testing both the bots' functionality itself and core gameplay mechanics, should changes be made.

There are still limitations to what these systems can do, largely due to complexities of the game world that would seem intuitive to a player but where the client bot might need some extra hand-holding. And of course, despite this big push into automation, there's still a lot of value gained from having people sit down and play the game as well.

Philip Dunstan: "There's definite restrictions to what our client bots can do. Again they're not really trying to play like a human, we're not trying to model human play. They move through the levels on a 'golden path', a hand-placed level-designed path on this is how you move through the level. They need to know how to interact with the doors or they need to know how to interact with the laptops to unlock the next part of the level. So they do require some manual scripted setup. So they're not really playing like a player would play. But they still provide a lot of benefit even with those restrictions.

"You still need to have dev testers, y'know, testing that you can't walk off, that there aren't navmesh blockers preventing you from getting to parts of levels, because the client bot will move along the golden path and not check the areas of the combat space. But you get an early sort of smoke-test system of saying, y'know, 'is there something significantly wrong with this level?'"

As games continue to become larger and more complex, there is a real need to automate critical aspects of development. Be it testing frameworks, batch processing of art pipelines, animation controllers, design tools and more, all of it serves the needs of the development team, allowing programmers, artists and designers to focus their efforts on delivering the best game they can. Artificial intelligence is slowly changing the way in which video games are built and is being applied in ways that players would never really think about. The real achievement is that, by employing AI in these new and pragmatic ways, it helps keep the problems that can emerge in game development manageable. It keeps projects on track and on budget. And it's important that players understand the challenges of how games are made: be it the Division 2, other Ubisoft projects or the industry as a whole.

Special thanks to Ubisoft for the opportunity to work with them on this project. And of course to my patrons who crowdfund the AI and Games series.

View post:

The Secret AI Testers inside Tom Clancy's The Division - Gamasutra

Joint Statement on the Creation of the Global Partnership on Artificial Intelligence – JD Supra

[co-author: Adam Perkins, Trainee Solicitor]

On June 15, 2020, the Government of the United Kingdom issued a joint statement announcing the creation of the Global Partnership on Artificial Intelligence (GPAI) along with 14 other founding members, including the European Union and the United States of America.

As announced, GPAI is an international partnership that will aim to promote the responsible development and use of Artificial Intelligence (AI) in a human-centric manner. This means developing and deploying AI in a way that is consistent with human rights, fundamental freedoms and shared democratic values. GPAI's aim is to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.

The values which GPAI endorses reflect the core AI principles as promoted by the Organisation for Economic Co-operation and Development (OECD) in the May 2019 OECD Council Recommendation on AI. The OECD will host GPAI's Secretariat in Paris, and GPAI will draw upon the OECD's international AI policy leadership. It is thought that this integration will strengthen the evidence base for policy aimed at responsible AI. In addition, GPAI has stated that it is looking forward to working with other interested countries and partners.

Centres of Expertise in Montreal and Paris will provide research and administrative support to GPAI, while the GPAI Secretariat will lend support to GPAI's governing bodies, consisting of a council and steering committee. GPAI will engage in scientific and technical work and analysis, bringing together experts within academia, industry and government to collaborate across four initial working groups: responsible AI, data governance, the future of work, and innovation and commercialization.

The outlook of these working groups appears to reflect GPAI's recognition of the potential for AI to act as a catalyst for sustainable economic growth and development, provided that it can be done in an accountable, transparent and responsible manner.

GPAI's short-term priority, however, is to investigate how AI can be used to help with the response to, and recovery from, COVID-19.

The first annual GPAI Multistakeholder Experts Group Plenary is planned to take place in December 2020.

The creation of GPAI is an exciting new step in the global effort to harness the possibilities which AI offers in an ethical and responsible way, minimizing the risks to individuals' rights and freedoms. We will be monitoring its progress.

Link:

Joint Statement on the Creation of the Global Partnership on Artificial Intelligence - JD Supra

AI And Account Based Marketing In A Time Of Disruption – Forbes


We don't know how the massive shifts in consumer behavior brought on by the COVID-19 pandemic will evolve or endure. But we do know that as our lives change, marketers' data change. Both the current impact and the future implications may be significant.

I asked Alex Atzberger, CEO of Episerver, a digital experience company, to put the issues in perspective.

Paul Talbot: How is AI holding up? Has the pandemic impacted the quality of data used to feed analytic tools that help marketers create both strategic and tactical scenarios and insights?

Alex Atzberger: There is more data and more need for automation and AI now than ever. Website traffic is up, and digital engagement is way up due to COVID-19.

Business leaders and marketers now need automation and AI to free up headspace as they have to deal with so many fires.

Many marketers rely on personalization from AI engines that run in the background so that they can adjust their messaging to our times. AI is a good thing for them right now. They're able to get data faster, analyze faster and make better decisions.

However, they need to be aware of what has changed. For example, some of the data inputs may not be as good as before as people work from home and IP addresses are no longer identifying the company someone is with.

Talbot: Given the unknowns we all face, how can marketing strategy be adjusted thoughtfully?

Atzberger: A practitioner's time horizon for strategy shortens dramatically in crisis, and you need to spend more time on it. Planning is done in weeks and months, and you need to be ready to re-plan, especially since you have limited visibility into demand.

It can still be done thoughtfully but needs to adapt to the new situation and requires input from sales, partners and others on what channels and activities are working. The more real-time you can assess what is working, the better you can adjust and plan for the future.

Talbot: On a similar note, how have coronavirus disruptions altered the landscape of account-based marketing?

Atzberger: It has created massive disruptions. ABM depends on being able to map visitors to accounts. We see companies where that mapping ability has dropped 50% since working from home started. This is a big challenge.

A lot of the gains in ABM in recent years rest on our ability to target ads and content, direct sales team efforts and look at third-party intent signals. Without a fundamental piece of data, the picture is fuzzy again. It's like being fitted with a worse prescription of glasses: you just can't see as clearly.

Talbot: With the soaring numbers of people working from home, how does this impact marketing strategy for the B2B organization?

Atzberger: In a big way. Anything based on account is going to be affected because it's now more difficult to identify these buyers, who are at home and look the same.

Direct mail programs are a big challenge because you can't really send stuff to their homes; that's a little creepy. Events are severely impacted too, and sponsoring or attending an online version of a big industry trade show just isn't quite the same thing.

The marketing mix has to shift, your website has to work harder, your emails have to work harder, webinars have to work harder, all these digital channels will need to deliver much more to make up for systemic softness in other areas.

Talbot: Any other insights you'd like to share?

Atzberger: We like to say, you are what you read. Rather than relying on IP addresses, you can 1:1 personalize content based on a visitor's actual site activity.

This is what ABM is all about: to figure out what's more relevant for a person based on their industry. Now leapfrog that and go to the individual to act on what she's interested in at that moment. The current crisis might give you the best reason for change.

Originally posted here:

AI And Account Based Marketing In A Time Of Disruption - Forbes

Intel, Qualcomm, Google, and NVIDIA Race to Develop AI Chips and Platforms – All About Circuits

Artificial intelligence labs race to develop processors that are bigger, faster, stronger.

With major companies rolling out AI chips and smaller startups nipping at their heels, there's no denying that the future of artificial intelligence is indeed already upon us. While each boasts slightly different features, they're all striving to provide ease of use, speed, and versatility. Manufacturers are demonstrating more adaptability than ever before, and are rapidly developing new versions to meet a growing demand.

In a marketplace that promises to do nothing but grow, these four are braced for impact.

The Verge reports that Qualcomm's processors account for approximately 40% of the mobile market, so their entry into the AI game is no surprise. They're taking a slightly different approach, though: adapting existing technology that utilizes Qualcomm's strengths. They've developed a Neural Processing Engine, an SDK that allows developers to optimize apps to run different AI applications on Snapdragon 600 and 800 processors. Ultimately, this integration means greater efficiency.

Facebook has already begun using the SDK to speed up augmented reality filters within its mobile app. Qualcomm's website says it may also be used to help a device's camera recognize objects and detect objects for better shot composition, as well as make on-device post-processing beautification possible. They also promise more capabilities via the virtual voice assistant, and assure users of the broad market applications - "from healthcare to security, on myriad mobile and embedded devices," they write. They also boast superior malware protection.

"It allows you to choose your core of choice relative to the power performance profile you want for your user," said Gary Brotman, Qualcomm's head of AI and machine learning.

Qualcomm's SDK works with popular AI frameworks, including TensorFlow, Caffe, and Caffe2.

Google's AI chip showed up relatively early to the AI game, disrupting what had been a pretty singular marketplace. And Google's got no plans to sell the processor, instead distributing it via a new cloud service from which anyone can build and operate software via the internet that utilizes hundreds of processors packed into Google data centers, reports Wired.

The chip, called TPU 2.0 or Cloud TPU, is a follow-up to the initial processor that brought Google's AI services to fruition, though it can be used to train neural networks and not just run them like its predecessor. Developers need to learn a different way of building neural networks since it is designed for TensorFlow, but they expect - given the chip's affordability - that users will comply. Google has mentioned that researchers who share their research with the greater public will receive access for free.

Jeff Dean, who leads the AI lab Google Brain, says that the chip was needed to train with greater efficiency. It can handle 180 trillion floating-point operations per second. Several chips connect to form a pod that offers 11,500 teraflops of computing power, which means it takes only six hours to train a model on a portion of a pod that previously took a full day on 32 CPU boards.
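
Taking the article's two throughput figures at face value, a quick division gives the pod's size:

\[
\frac{11{,}500\ \text{teraflops per pod}}{180\ \text{teraflops per chip}} \approx 64\ \text{chips per pod}
\]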

Intel offers an AI chip via the Movidius Neural Compute Stick, which is a USB 3.0 device with a specialized vision processing unit. It's meant to complement the Xeon and Xeon Phi, and costs only $79.

While it is optimized for vision applications, Intel says that it can handle a variety of DNN applications. They write: "Designed for product developers, researchers and makers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor."

The stick is powered by a VPU like what you might find in smart security cameras, AI drones, and industrial equipment. It can be used with a trained Caffe framework-based feed-forward convolutional neural network, or the user may choose another pre-trained network, Intel reports. The Movidius Neural Compute Stick supports CNN profiling, prototyping, and tuning workflows; provides power and data over a single USB Type-A port; does not require cloud connectivity; and runs multiple devices on the same platform.

From Raspberry Pi to PC, the Movidius Neural Compute Stick can be used with any USB 3.0 platform.
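
As a rough idea of that workflow, here is a minimal inference sketch assuming the first-generation NCSDK Python API (mvnc); exact call names may differ between SDK versions, and the graph file and input shape here are placeholders:

import numpy
from mvnc import mvncapi as mvnc  # first-generation NCSDK Python API

# Find the stick on the USB bus and open it.
devices = mvnc.EnumerateDevices()
if not devices:
    raise SystemExit("No Neural Compute Stick found")
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load a compiled graph file (produced offline from a trained Caffe
# model by the SDK's compiler) onto the stick.
with open("graph", "rb") as f:
    graph = device.AllocateGraph(f.read())

# Push one image and read back the classification result. The input is
# assumed to be preprocessed to the network's expected shape.
image = numpy.zeros((224, 224, 3), dtype=numpy.float16)  # placeholder
graph.LoadTensor(image, "user object")
output, _ = graph.GetResult()
print("top class:", int(output.argmax()))

graph.DeallocateGraph()
device.CloseDevice()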

NVIDIA was the first to get really serious about AI, but they're even more serious now. Their new chip, the Tesla V100, is a data center GPU. Reportedly, it made enough of a stir that it caused NVIDIA's shares to jump 17.8% on the day following the announcement.

The chip stands apart in training, which typically requires multiplying matrices of data a single number at a time. Instead, the Volta GPU architecture multiplies rows and columns at once, which speeds up the AI training process.

With 640 Tensor Cores, Volta is five times faster than Pascal, reducing training time from 18 hours to 7.4, and it uses next-generation high-speed interconnect technology which, according to the website, "enables more advanced model and data parallel approaches for strong scaling to achieve the absolute highest application performance."
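
Each Tensor Core performs a fused multiply-add on small matrices, computing D = A x B + C where A, B, C and D are 4x4 matrices, per clock cycle. A rough back-of-the-envelope check of the headline throughput, assuming NVIDIA's published boost clock of roughly 1.5 GHz:

\[
\underbrace{640}_{\text{Tensor Cores}} \times \underbrace{4 \times 4 \times 4 \times 2}_{\text{FLOPs per core per clock}} \times \underbrace{1.5 \times 10^{9}}_{\text{clock rate (Hz)}} \approx 1.2 \times 10^{14}\ \text{FLOPs/s} \approx 120\ \text{teraflops}
\]

which is consistent with the V100's advertised tensor throughput.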

Heard of more AI chips coming down the pipe? Let us know in the comments below!

Read the original:

Intel, Qualcomm, Google, and NVIDIA Race to Develop AI Chips and Platforms - All About Circuits

Demystifying AI: Can Humans and AI coexist to create a hyper-productive HumBot organisation? – The Indian Express

New Delhi | Updated: June 12, 2020 11:29:24 am

By Ravi Mehta, Sushant Kumaraswamy, Sudhi H and Prashant Kumar

In the last few years, as powerful AI technologies have become more mainstream, many apprehensions have been raised about the role AI will play in the evolution of work. While many opinions have been expressed on the predatory role of AI (for example, that AI will replace most of the work humans do), we offer an alternate view of the role AI can play in our lives and especially in our organizations. The famous poet Robert Frost once beautifully articulated: "Two roads diverged in a wood, and I took the one less traveled by, and that has made all the difference." It seems that, as business leaders, we are at a similar two-roads-diverged-in-a-wood moment, and the decisions we take on the role AI will play in our organisations will probably significantly change their evolutionary trajectory. We believe that organisations can benefit much more from following an augmentation strategy (as compared to a replacement strategy) as it relates to AI. Our experience has shown that, if augmented effectively with unique human capabilities, AI has the potential to significantly transform the three key pillars of organisations - work, worker and workplace - and enable the creation of a hyper-productive HumBot (Human + Robot) organisation.

Work is a fundamental and defining component of human life. While technology advancements have impacted the way work is done, humans still spend a lot of time doing work that can be best done by a machine (bot). By freeing up humans to focus more on those tasks (for example, empathy and inspiration) that maximise their potential, we are likely to significantly increase organisational productivity. However, to achieve this, we will need to redesign work to optimally utilise and integrate the best of both human and bot capabilities. For example, we can leverage the bot's ability to do high-volume, complex data collation tasks (for example, downloading bulk data from multiple systems at different times and doing pattern and anomaly detection) and augment that with uniquely human skills (for example, deep enquiry and crisp articulation) to create a proactive insights platform that can significantly enhance the quality of decision making throughout the organisation.

As work gets redesigned through the infusion of AI technologies, the role of the worker (doing the work) is also likely to change significantly. While some roles may get replaced by AI, we believe AI technologies can lead to two significant benefits for workers: (a) they can create new roles that do not exist today, and (b) they can transform existing roles to make them more impactful. For example, while AI may automate a transactional process like invoice processing (and hence replace the work of people processing invoices), it can create new, higher value-added roles for better managing the working capital of the organisation and for enhancing the quality of the relationship the organisation has with its ecosystem of vendors and partners. Additionally, AI has the potential to further increase the effectiveness of these new roles by acting as a personalised digital augmenter (for example, alerting the vendor relationship manager to significant news about an important vendor and proactively performing quick, customised correlation analysis to suggest next best moves for consideration). By embracing (rather than fearing and resisting) AI, we have the opportunity to enhance the quality of work and provide human workers more opportunities to find joy, meaning and fulfilment in their work.

Also Read:Automation and AI in a changing business landscape

As the work gets redesigned and the role of worker gets enhanced, the workplace is also expected to change significantly. COVID-19 has taught us that humans are resilient enough to change their behaviours and attitudes quickly and dramatically. As work from anywhere becomes more common, the definition of workplace may become more fluid. While this increased fluidity may lead to increased productivity and better worker morale, organisations will need to consider creating a more secure, responsive and collaborative hybrid (virtual and physical) workplace. AI technologies (for example, virtual whiteboards that convert speech to text and vice versa) can help create these hybrid workplaces to help human workers achieve better outcomes in a faster, smarter and more secure manner.

As business leaders navigate the proverbial two-roads-diverged-in-a-wood moment as it relates to defining the right AI strategy for their organisations, we suggest also considering the augmentation strategy (as compared to the replacement strategy we hear most about). Defining and implementing the right AI strategy can help organisations to create a hyper-productive HumBot organisation in which a new type of work is performed by a new type of worker in a new type of (hybrid) workplace.

Ravi Mehta is Partner; Sushant Kumaraswamy, Director; Sudhi H, Associate Director; and Prashant Kumar, Senior Consultant at Deloitte India.


Read more:

Demystifying AI: Can Humans and AI coexist to create a hyper-productive HumBot organisation? - The Indian Express

Discover Unlimited Possibilities with OpenAI’s AI Tool GPT-3 – Analytics Insight

Developed by OpenAI, the research lab co-founded by Elon Musk, GPT-3 is an autoregressive language model that deploys deep learning to produce human-like text. OpenAI's GPT-3 is currently the largest artificial intelligence language model, mired in debates that range from whether it is a step closer to AGI (Artificial General Intelligence) to whether it is the first step toward creating this sort of superintelligence.

GPT-3 (Generative Pre-trained Transformer 3) is the third in a series of autocomplete tools designed by OpenAI. The GPT-3 program has been trained on a huge corpus of text, stored as billions of weighted connections between the different nodes in GPT-3's neural network. The program looks for and finds patterns without any guidance, which it then uses to complete text prompts. If you input the word "fire" into GPT-3, the program knows, based on the weights in its network, that the words "alarm" and "water" are much more likely to follow than "soil" or "forests".
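
You can inspect those weighted continuations directly through the API OpenAI exposes for GPT-3. Here is a minimal sketch using the 2020-era Completions endpoint (engine names and the Python client interface have since changed, so treat this as a period sketch rather than current usage; the API key is a placeholder):

import openai  # pip install openai; the 2020-era client interface

openai.api_key = "YOUR_API_KEY"  # placeholder

# Ask the base GPT-3 engine for one next token, plus the five most
# likely alternatives and their log-probabilities.
response = openai.Completion.create(
    engine="davinci",  # the original 175-billion-parameter GPT-3 engine
    prompt="fire",
    max_tokens=1,
    logprobs=5,
)

# Continuations like " alarm" should score far higher than " soil" here.
print(response["choices"][0]["logprobs"]["top_logprobs"][0])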

GPT-3 is trained on 175 billion parameters - more than 100 times more than its predecessor and ten times more than comparable programs - to complete a mind-boggling array of autocomplete tasks, whose sharpness astonishes mankind!

The entirety of the dataset GPT-3 was trained on:

The English Wikipedia, spanning some 6 million articles, which makes up only 0.6 percent of its training data.

Digitized books and various web links, including news articles, recipes, and poetry, coding manuals, fanfiction, religious prophecy, and whatever else imaginable!

Any type of good and bad text that has been uploaded to the internet, including potentially harmful conspiracy theories, racist screeds, pseudoscientific textbooks, and the manifestos of mass shooters.

It's hardly comprehensive, but here's a small sample of things people have created with GPT-3:

A chatbot that talks to historical figures

Because GPT-3 has been trained on so many digitized books, it has assimilated a fair amount of knowledge relevant to specific thinkers. Leverage GPT-3 to make a chatbot talk like the philosopher Bertrand Russell, and ask him to explain his views. Fictional characters are as accessible to GPT-3 as historical ones. Check out the exciting dialogue between Alan Turing and Claude Shannon, interrupted by Harry Potter!

Make your own quizzes

Definitely a blessing to the education system, GPT-3 is an awesome helper for teachers as well as students. It will generate quizzes for practice on any topic and also explain the answers to these questions in detail, helping students to learn anything from anyone - be it robotics from Elon Musk, physics from Newton, relativity theory from Einstein, or literature from Shakespeare.

A question-based search engine

Trained on the entire Wikipedia, GPT-3 is like Google but for questions and answers. Type a question and GPT-3 directs you to the relevant Wikipedia URL for the answer.

Answer medical queries

A medical student from the UK used GPT-3 to answer health care questions. The program not only gave the right answer but correctly explained the underlying biological mechanism.

Style transfer for text

The input text is written in a certain style and GPT-3 can change it to another. In an example on Twitter, a user input text in plain language and asked GPT-3 to change it to legal language. This transforms inputs from "my landlord didn't maintain the property" to "The Defendants have permitted the real property to fall into disrepair and have failed to comply with state and local health and safety codes and regulations."

Compose its own music

Guitar tabs are shared on the web using ASCII text files, which comprise part of GPT-3's training dataset. Naturally, that means GPT-3 can generate music itself after being given a few chords to start.

Write creative fiction

This is a wide-ranging area within GPT-3's skillset but an incredibly impressive one. The best collection of the program's literary samples comes from independent researcher and writer Gwern Branwen, who has collected a trove of GPT-3's writing. It ranges from a type of one-sentence pun known as a Tom Swifty to poetry in the style of Allen Ginsberg, T.S. Eliot, and Emily Dickinson to Navy SEAL copypasta.

Autocomplete images, not just text

The basic GPT architecture can be retrained on pixels instead of words, allowing it to perform the same autocomplete tasks with visual data as it does with text input.

Solving language and syntax puzzles

You can show GPT-3 certain linguistic patterns (like "truck driver" becomes "driver of truck" and "chocolate cake" becomes "cake made of chocolate") and it will correctly complete any new prompts you show it. However, being at a nascent stage, a lot of developments are still bound to happen. As computer science professor Yoav Goldberg, who's been sharing lots of these examples on Twitter, puts it, such abilities are new and super exciting for AI, but they don't mean GPT-3 has mastered language.
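
The "show it the pattern" part is nothing more than the prompt itself. A few-shot prompt for the example above might look like the following (the prompt text and the expected continuation are illustrative, and the model's actual output can vary from run to run):

prompt = """truck driver -> driver of truck
chocolate cake -> cake made of chocolate
research paper -> paper about research
mountain lion ->"""

# Sent to GPT-3 (e.g. via the Completions call sketched earlier), the
# model typically continues the pattern with something like
# " lion of the mountain" -- pattern completion from a handful of
# examples, not genuine linguistic understanding.
print(prompt)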

Code generation based on text descriptions

Describe a design element or page layout of your choice in simple words, and GPT-3 spits out the relevant code. Users have used GPT-3 to generate code for a machine learning model just by describing the dataset and required output. In another example, in a layout generator, you describe any layout you want, and GPT-3 will generate the JSX code for you.

A world of Unlimited Possibilities has just Begun!

Read more from the original source:

Discover Unlimited Possibilities with OpenAI's AI Tool GPT-3 - Analytics Insight