Fears missing ISIS millions are hidden in cryptocurrency ready for use as war chest – The National

ISIS is using cryptocurrency platforms to conceal donations and get around financial security measures, experts have revealed after a surge in advertising for donations.

They fear the terrorist group's missing $300 million (Dh1.1 billion) war chest could have been transferred into a digital currency to hide it from the authorities.

Last year ISIS used cryptocurrency to fund the Easter Sunday terrorist attack in Sri Lanka, which killed more than 250 people when suicide bombers attacked churches and hotels in quick succession.

The Counter Extremism Project, a think tank, tracked the trend in a new report, Cryptocurrencies and Financing of Terrorism: Threat Assessment and Regulatory Challenges, launched in an online seminar on Monday.

Its director, Hans-Jakob Schindler, who has worked in the UN Security Council's monitoring unit for ISIS and Al Qaeda, told The National that the authorities have been searching for the group's missing war chest since 2017.

"I'm wondering if from 2017 to 2020 there has been $300m that we have not found, and that's why I'm thinking this might have been one of the ways it might have been used," Mr Schindler said.

"This would be an ideal storage mechanism until it is needed. If done right, it would be unfindable and unseizable for most governments."

ISIS is believed to be the first terrorist group to have had its cryptocurrency activities prosecuted in court.

US teenager Ali Shukri Amin was jailed for 11 years in 2015 for providing ISIS supporters with an online manual on how to use Bitcoin to conceal donations.

Mr Schindler said there had been consistent cases of ISIS and Hamas using cryptocurrency since 2014.

"From the get-go, ISIS has been clearly interested in what can be done with this new technology," he said.

Dr Schindler said that when digital transactions were broken up into smaller transactions, it was next to impossible to trace them back.

"Cryptocurrency is good for terrorists if they become public, because it enables more people to fund them without running the risk of being discovered or stopped," he said.

Dr Schindler is urging EU governments to collaborate on a common framework for tighter regulation.

"For once you can be ahead of the curve and have time now to work on regulations before it becomes a $100m problem," he said.

Yaya Fanusie, of the Foundation for the Defence of Democracies think tank, has been studying terrorist groups' use of cryptocurrency since 2016.

Mr Fanusie said he first noticed a rise in advertisements for digital donations on crowdfunding sites.

He said the publication of ISIS's digital currency handbook in 2014 was an important milestone.

"It shows exactly when supporters of the group looked at ways to make money throughout the world for the ISIS battlefield," Mr Fanusie said.

"It has grown in sophistication. Instead of one blockchain address, there are multiple addresses that are difficult for law enforcers to track.

"We are talking software you can download, and you do not have to go through an exchange."

He said the saving grace so far was that people need to cash out, and that limits their movements.

"We are going to have to be ahead of the game," Mr Fanusie said.

Last year a report by US security group the National Security Research Division called for international co-operation between law enforcement and the intelligence community in dealing with the problem.

"The speed at which these technologies are adopted, and the details of which technologies are used and how they are deployed, are critical uncertainties that have important operational impacts," it said.

"This analysis suggests that regulation and oversight of cryptocurrencies, along with international co-operation between law enforcement and the intelligence community, would be important steps to prevent terrorist organisations from using cryptocurrencies to support their activities."

Updated: May 19, 2020 02:51 AM


Cryptocurrency Market 2020 Size & Share Outlook with COVID-19 Impact Analysis and Forecast to 2026 – Cole of Duty

Facts & Factors Market Research added a recent report on Cryptocurrency Market By Type (Bitcoin, Ethereum, Ripple, Litecoin, Dashcoin, Others), By Component (Hardware, Software), By Process (Transaction, Mining), and By End-Users Analysis (Banking, Real Estate, Stock Market & Virtual Currency, Others): Global Industry Outlook, Market Size, Business Intelligence, Consumer Preferences, Statistical Surveys, Comprehensive Analysis, Historical Developments, Current Trends, and Forecasts, 2020-2026 to its research database. The Cryptocurrency Market research report is an output of a brief assessment and an extensive analysis of practical data collected from the global industry.

This specialized, expertise-oriented industry research report scrutinizes the technical and commercial business outlook of the Cryptocurrency industry. The report analyzes the historical and current trends of the Cryptocurrency industry and subsequently outlines the trends anticipated in the Cryptocurrency market during the upcoming years.

The Cryptocurrency market report analyzes and reports the industry statistics at the global, regional, and country levels to provide a thorough perspective of the entire Cryptocurrency market. Historical insights are provided for FY 2016 to FY 2019, whereas projected trends are delivered for FY 2020 to FY 2026. The quantitative and numerical data is represented in terms of value from FY 2016 to FY 2026.

The quantitative data is further underlined and reinforced by comprehensive qualitative data comprising various across-the-board market dynamics. The factors which directly or indirectly impact the Cryptocurrency industry are exemplified through parameters such as growth drivers, restraints, challenges, and opportunities, among other impacting factors.

Request an Exclusive Free Sample Report of the Cryptocurrency Market: https://www.fnfresearch.com/sample/cryptocurrency-market-by-type-bitcoin-ethereum-ripple-litecoin-640

Our Every Free Sample Includes:

COVID-19 impact analysis, a research report overview, the table of contents, a list of tables and figures, an overview of major market players, and the key regions included.


This research report provides forecasts in terms of CAGR and Y-O-Y growth. This helps in understanding the overall market and recognizing the growth opportunities in the global Cryptocurrency Market. The report also includes detailed profiles of all the major market players currently active in the global Cryptocurrency Market. The companies covered in the report are evaluated based on their latest developments, financial and business overview, product portfolio, key trends in the market, and the long-term and short-term business strategies they adopt to stay competitive in the market.

The global Cryptocurrency Market size & trends are classified based on the types of products, application segments, and end-user. Each segment expansion is assessed together with the estimation of their growth in the upcoming period. The related data and statistics collected from the regulatory organizations are portrayed in the Cryptocurrency Market report to assess the growth of each segment.

Request a Customized Copy of the Report @ https://www.fnfresearch.com/customization/cryptocurrency-market-by-type-bitcoin-ethereum-ripple-litecoin-640

(We customize the report according to your research needs. Ask our sales team for report customization.)

Available Customization Options:

The Cryptocurrency Market report can be customized to the country level or any other market segment. Besides this, we understand that you may have your own business needs; hence, we also provide fully customized solutions to clients.

To know more about a discount on a specific report, kindly mail us at [emailprotected] with your requirement. We will send you a quote and a discount offer, if available.

About Us:

Facts & Factors is a leading market research company that offers customized research reports and consulting services. Facts & Factors focuses on management consulting, industry chain research, and advanced research to assist clients by providing a planned revenue model for their business. Our reports and services are used by prestigious academic institutions, start-ups, and companies globally to understand the international and regional business background. Our wide-ranging database offers statistics and detailed analysis of different industries worldwide that help clients achieve sustainable progress. The well-organized reports help clients develop strategies and make informed business decisions.

Contact Us:

Facts & Factors

Global Headquarters

Level 8, International Finance Center, Tower 2,

8 Century Avenue, Shanghai,

Postal 200120, China

Tel: +8621 80360450

E-Mail: [emailprotected]

Web: https://www.fnfresearch.com


Artificial intelligence | Memory Alpha | Fandom

Artificial intelligence (or computer intelligence) was a term used in the fields of cybernetics, computer science, and related disciplines, to describe computer hardware and software sophisticated enough to reason independently, form new conclusions, and alter its own responses based on real life experiences. Its goal was to simulate the humanoid brain's functions, with the benefit that hardware equipped with this software could be autonomous, including learning in new situations. (TNG: "The Measure Of A Man", "Home Soil"; VOY: "The Thaw", "Warhead") Artificial lifeforms were the pinnacle of artificial intelligence. (TNG: "The Offspring")

Artificial intelligence software was created for many different reasons, which were normally programmed into the software. In some cases, they were used to perform specific functions, such as guiding a missile or managing a large network of space stations. (ENT: "The Forgotten", "The Council", "Countdown"; VOY: "Warhead") In other cases, they were used to create androids and programmed to follow their designers' instructions or simply to exist and explore their existence. (TOS: "What Are Little Girls Made Of?"; TNG: "The Measure Of A Man", "The Offspring") They could also be used to create holograms. (VOY: "The Thaw", "Warhead")

Advanced computer hardware was required to run artificial intelligence software, such as a positronic brain. (TNG: "Datalore", "The Measure Of A Man", "The Offspring")

Artificial intelligence was considered an advanced technology and required advanced degrees in order to design. (TNG: "Home Soil", "The Measure Of A Man") It was also considered necessary tactical information by other species. (VOY: "Blood Fever")

Artificial intelligence software, while able to reason and make decisions, was widely regarded as unable to quite achieve the sophistication of a humanoid brain, in particular its capacity for original thought and creativity. (TNG: "Elementary, Dear Data"; VOY: "The Thaw") However, because its behavior so closely resembled real humanoid behavior, it was an open question in many societies whether this software resulted in machines that could achieve the capacity for self-awareness, sentience, or emotional feeling. This resulted in significant controversy. (TNG: "The Measure Of A Man"; Star Trek Generations; VOY: "Author, Author")

Even certain highly advanced alien computers may or may not meet the criteria for artificial intelligence. Landru ruled Beta III for almost six thousand years, but Kirk and Spock deduced its nature in less than a day. (TOS: "The Return of the Archons")

As of 2364, Doctor Kurt Mandl held advanced degrees in computer science and artificial intelligence. Natasha Yar therefore felt he could have re-programmed a laser drill to murder Arthur Malencon. (TNG: "Home Soil")

When the Borg were encountered by the USS Enterprise-D, it was speculated that the biological lifeform they began as was "almost immediately after birth" given artificial implants, and according to William T. Riker, "apparently the Borg have developed the technology to link artificial intelligence directly into the humanoid brain." (TNG: "Q Who")

In 2365, Ensign Sonya Gomez rationalized her addition of the word "please" to a request for hot chocolate from the replicator as being reasonable given that it was intelligent circuitry. She felt that working with artificial intelligence all the time tended to be dehumanizing and suggested courtesy as the remedy, demonstrating it again by thanking the replicator for her beverage. (TNG: "Q Who")

Artificial intelligence was still an active field of research in the 24th century. In late 2367, Geordi La Forge wanted to attend a seminar about artificial intelligence on Risa. (TNG: "The Mind's Eye")

In the 2370s an individual of the Think Tank was an artificial intelligence. (VOY: "Think Tank")

In the 2370s, the Druoda created devices with artificial intelligence, built into their weapons, identified as Series 5 long-range tactical armor units. These intelligences were programmed with a singular tactical mission. (VOY: "Warhead")

The Sakari interrogated Chakotay of the USS Voyager about his crew's technology, including medical and artificial intelligence. (VOY: "Blood Fever")

The hibernation computer system of the Kohl settlement created a virtual environment which required humanoid brain activity for its characters to function. Voyager's crew suggested replacing its inhabitants with an artificial intelligence, but it was quickly decided that would be insufficient. (VOY: "The Thaw")

The Sphere-Builders built a number of spherical devices in the Delphic Expanse in order to colonize it in the 22nd century. Each sphere had an artificial intelligence in it which ran the sphere's infrastructure and worked as a network with all other spheres for defense and to generate gravimetric energy in a certain pattern among the spheres. Some of the spheres were control spheres that set directions. (ENT: "The Forgotten", "The Council", "Countdown")


Artificial intelligence and Machine learning made simple

Lately, Artificial Intelligence and Machine Learning have been hot topics in the tech industry. Artificial Intelligence (AI) is arguably affecting the business world even more than our daily lives. About $300 million in venture capital was invested in AI startups in 2014, a 300% increase over the year before (Bloomberg). AI is everywhere, from gaming stations to maintaining complex information at work. Computer engineers and scientists are working hard to impart intelligent behavior to machines, making them think and respond to real-time situations. AI is transitioning from just a research topic to the early stages of enterprise adoption. Tech giants like Google and Facebook have placed huge bets on Artificial Intelligence and Machine Learning and are already using them in their products. But this is just the beginning; over the next few years, we may see AI steadily glide into one product after another.

According to Stanford researcher John McCarthy, Artificial Intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs. Artificial Intelligence is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

Simply put, AI's goal is to make computers and computer programs smart enough to imitate the behaviour of the human mind.

Knowledge Engineering is an essential part of AI research. Machines and programs need abundant information about the world to act and react like human beings. AI must have access to properties, categories, objects, and the relations between all of them to implement knowledge engineering. AI instils common sense, problem-solving, and analytical reasoning power in machines, which is a difficult and tedious job.

AI services can be classified into Vertical AI and Horizontal AI.

Vertical AI services focus on a single job, whether that's scheduling meetings or automating repetitive work. Vertical AI bots perform just one job for you, and do it so well that we might mistake them for a human.

Horizontal AI services are able to handle multiple tasks; there is no single job to be done. Cortana, Siri, and Alexa are some examples of Horizontal AI. These services operate broadly in question-and-answer settings, such as "What is the temperature in New York?" or "Call Alex." They work across multiple tasks rather than for one particular task.

AI is achieved by analysing how the human brain works while solving a problem and then using those analytical problem-solving techniques to build complex algorithms that perform similar tasks. AI is an automated decision-making system that continuously learns, adapts, suggests, and takes actions automatically. At their core, such systems require algorithms which are able to learn from their experience. This is where Machine Learning comes into the picture.

Artificial Intelligence and Machine Learning are trending, and often confused, terms nowadays. Machine Learning (ML) is a subset of Artificial Intelligence. ML is the science of designing and applying algorithms that are able to learn from past cases. If some behaviour existed in the past, you may predict whether it can happen again; if there are no past cases, there is no prediction.

ML can be applied to solve tough problems like credit card fraud detection, enabling self-driving cars, and face detection and recognition. ML uses complex algorithms that constantly iterate over large data sets, analyzing the patterns in data and enabling machines to respond to situations for which they have not been explicitly programmed. The machines learn from history to produce reliable results. ML algorithms use computer science and statistics to predict rational outputs.

There are 3 major areas of ML:

In supervised learning, labelled training datasets are provided to the system. Supervised learning algorithms analyse the data and produce an inferred function, which can then be used for mapping new examples. Credit card fraud detection is one example of a supervised learning application.

Supervised Learning and Unsupervised Learning (Reference: http://dataconomy.com/whats-the-difference-between-supervised-and-unsupervised-learning/)
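As a toy illustration of supervised learning, here is a minimal 1-nearest-neighbour classifier in Python for the fraud-detection example mentioned above. The features, threshold values, and labels below are invented purely for illustration and are not drawn from any real fraud-detection system:

```python
def predict(train, point):
    """1-nearest-neighbour: label a new point like its closest training example."""
    nearest = min(train, key=lambda ex: (ex[0][0] - point[0]) ** 2
                                        + (ex[0][1] - point[1]) ** 2)
    return nearest[1]

# Labelled training data: ((transaction amount, km from home), label).
train = [
    ((12.0, 3.0), "legit"),
    ((25.0, 8.0), "legit"),
    ((900.0, 4000.0), "fraud"),
    ((1500.0, 7500.0), "fraud"),
]

print(predict(train, (20.0, 5.0)))       # legit: small, local purchase
print(predict(train, (1200.0, 5000.0)))  # fraud: large, far-away purchase
```

The classifier is never given a rule for what "fraud" means; it infers the label of a new transaction from the labelled examples it was trained on, which is the essence of supervised learning.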

Unsupervised learning algorithms are much harder because the data fed to them is unlabelled. Here the goal is to have the machine learn on its own, without any supervision; the correct solution to the problem is not provided. The algorithm itself finds the patterns in the data. One example of unsupervised learning is the recommendation engines found on e-commerce sites, or Facebook's friend-request suggestion mechanism.

Recommendation Engine
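To make the contrast with supervised learning concrete, here is a minimal unsupervised-learning sketch: a two-cluster k-means pass over unlabelled purchase amounts. No labels are supplied; the grouping emerges from the data alone. The data values and the fixed iteration count are illustrative assumptions:

```python
def kmeans_2(points, iters=10):
    """Split 1-D points into two clusters around learned centres."""
    c1, c2 = min(points), max(points)          # initial centres
    for _ in range(iters):
        # Assign each point to its nearest centre, then re-centre each group.
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

low, high = kmeans_2([5, 8, 6, 95, 102, 99, 7])
print(low)   # [5, 6, 7, 8]
print(high)  # [95, 99, 102]
```

The algorithm was never told which purchases are "small" or "large"; it discovered the two groups from the structure of the data itself.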

Reinforcement learning algorithms allow software agents and machines to automatically determine the ideal behaviour within a specific context, so as to maximise performance. Reinforcement learning is defined by characterising a learning problem rather than by characterising learning methods: any method that is well suited to solving the problem can be considered a reinforcement learning method. Reinforcement learning assumes that a software agent (a robot, a computer program, or a bot) interacts with a dynamic environment to attain a definite goal. The technique selects the action that would yield the expected output efficiently and rapidly.
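A minimal sketch of this idea is tabular Q-learning, where the agent improves a table of action values purely from trial-and-error reward signals. The corridor environment, the reward of 1 at the goal, and the learning-rate/discount/exploration constants below are all illustrative assumptions, not a production setup:

```python
import random

random.seed(0)
N = 5                                    # corridor cells 0..4; cell 4 is the goal
ACTIONS = (-1, +1)                       # step left, step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = max(0, min(N - 1, s + a))   # walls clamp the move
        r = 1.0 if s2 == N - 1 else 0.0  # reward only at the goal
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy steps right from every non-goal cell.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(policy)  # [1, 1, 1, 1]
```

Nobody tells the agent that "move right" is correct; the behaviour emerges because actions leading toward the reward accumulate higher Q-values over repeated episodes.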

Artificial Intelligence and Machine Learning always interest and surprise us with their innovations. AI and ML have reached industries like customer service, e-commerce, and finance, among many others. By 2020, 85% of customer interactions will be managed without a human (Gartner). There are certain implications of AI and ML for data analysis, such as descriptive, prescriptive, and predictive analytics, discussed in our next blog: How Machine Learning Can Boost Your Predictive Analytics.


Can Artificial Intelligence Be Smarter Than a Person …

But the benign examples were just as interesting. In one test of locomotion, a simulated robot was programmed to travel forward as quickly as possible. But instead of building legs and walking, it built itself into a tall tower and fell forward. How is growing tall and falling on your face anything like walking? Well, both cover a horizontal distance pretty quickly. And the AI took its task very, very literally.

According to Janelle Shane, a research scientist who publishes a website about artificial intelligence, there is an eerie genius to this forward-falling strategy. "After I had posted [this paper] online, I heard from some biologists who said, 'Oh yeah, wheat uses this strategy to propagate!'" she told me. "At the end of each season, these tall stalks of wheat fall over, and their seeds land just a little bit farther from where the wheat stalk heads started."

From the perspective of the computer programmer, the AI failed to walk. But from the perspective of the AI, it rapidly mutated in a simulated environment to discover something which had taken wheat stalks millions of years to learn: Why walk, when you can just fall? A relatable sentiment.

The stories in this paper are not just evidence of the dim-wittedness of artificial intelligence. In fact, they are evidence of the opposite: a divergent intelligence that mimics biology. "These anecdotes thus serve as evidence that evolution, whether biological or computational, is inherently creative and should routinely be expected to surprise, delight, and even outwit us," the lead authors write in the conclusion. Sometimes, a machine is more clever than its makers.

This is not to say that AI displays what psychologists would call human creativity. These machines cannot turn themselves on, or become self-motivated, or ask alternate questions, or even explain their discoveries. Without consciousness or comprehension, a creature cannot be truly creative.

But if AI, and machine learning in particular, does not think as a person does, perhaps it's more accurate to say it evolves, as an organism can. Consider the familiar two-step of evolution. With mutation, genes diverge from their preexisting structure. With natural selection, organisms converge on the mutation best adapted to their environment. Thus, evolutionary biology displays a divergent and convergent intelligence that is a far better metaphor for the process of machine learning, like generative design, than the tangle of human thought.
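The mutation/selection two-step can be sketched in a few lines of code. Here a population of bit strings evolves toward an all-ones genome: random bit flips provide the divergence, and keeping only the fittest half of each generation provides the convergence. The population size, mutation rate, and fitness function are illustrative choices, not anything taken from the paper discussed above:

```python
import random

random.seed(1)
GENOME_LEN = 20

def fitness(genome):
    """Fitness is simply the number of 1-bits (the target is all ones)."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Divergence: each bit flips independently with a small probability."""
    return [bit ^ (random.random() < rate) for bit in genome]

# Start from a population of all-zero genomes.
population = [[0] * GENOME_LEN for _ in range(30)]

for generation in range(200):
    # Convergence: keep the fittest half, refill with mutated survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
print(fitness(best))  # approaches the maximum of 20
```

No individual step "knows" the target; high-fitness genomes simply survive and reproduce, which is exactly the divergent-then-convergent loop the paragraph above describes.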

AI might not be smart in a human sense of the word. But it has already shown that it can perform an eerie simulation of evolution. And that is a spooky kind of genius.



A.I. Artificial Intelligence – Wikipedia

2001 science fiction film directed by Steven Spielberg

A.I. Artificial Intelligence (also known as A.I.) is a 2001 American science fiction drama film directed by Steven Spielberg. The screenplay by Spielberg and screen story by Ian Watson were loosely based on the 1969 short story "Supertoys Last All Summer Long" by Brian Aldiss. The film was produced by Kathleen Kennedy, Spielberg and Bonnie Curtis. It stars Haley Joel Osment, Jude Law, Frances O'Connor, Brendan Gleeson and William Hurt. Set in a futuristic post-climate change society, A.I. tells the story of David (Osment), a childlike android uniquely programmed with the ability to love.

Development of A.I. originally began with producer-director Stanley Kubrick, after he acquired the rights to Aldiss' story in the early 1970s. Kubrick hired a series of writers until the mid-1990s, including Brian Aldiss, Bob Shaw, Ian Watson, and Sara Maitland. The film languished in protracted development for years, partly because Kubrick felt computer-generated imagery was not advanced enough to create the David character, whom he believed no child actor would convincingly portray. In 1995, Kubrick handed A.I. to Spielberg, but the film did not gain momentum until Kubrick's death in 1999. Spielberg remained close to Watson's film treatment for the screenplay.

The film received positive reviews, and grossed approximately $235 million. The film was nominated for two Academy Awards at the 74th Academy Awards, for Best Visual Effects and Best Original Score (by John Williams).

In a 2016 BBC poll of 177 critics around the world, Steven Spielberg's A.I. Artificial Intelligence was voted the eighty-third-greatest film since 2000.[3] A.I. is dedicated to Stanley Kubrick.

In the 22nd century, rising sea levels from global warming have wiped out coastal cities, reducing the world's population. Mecha, humanoid robots seemingly capable of complex thought but lacking in emotions, have been created. In Madison, New Jersey, David, a prototype Mecha child capable of experiencing love, is given to Henry Swinton and his wife Monica, whose son Martin contracted a rare disease and has been placed in suspended animation. Monica feels uneasy with David, but eventually warms to him and activates his imprinting protocol, causing him to have an enduring childlike love for her. David seeks to have Monica express the same love towards him. David is befriended by Teddy, a robotic teddy bear which belonged to Martin. Martin is unexpectedly cured of his disease and brought home. Martin becomes jealous of David, and taunts him to perform worrisome acts, such as cutting off locks of Monica's hair while she is sleeping. At a pool party, one of Martin's friends pokes David with a knife, triggering his self-protection programming. David grabs onto Martin, and they both fall into the pool. Martin is saved before he drowns, and Henry convinces Monica to return David to his creators to be destroyed. On the way there, Monica has a change of heart and spares David from destruction by leaving him in the woods. With Teddy as his only accompaniment, David recalls The Adventures of Pinocchio and decides to find the Blue Fairy so that she may turn him into a real boy, which he believes will win back Monica's love.

David and Teddy are captured by a Flesh Fair, where obsolete Mecha are destroyed before jeering crowds. David is nearly destroyed himself and pleads for his life. The audience, deceived by David's realistic nature, revolts and allows David to escape alongside Gigolo Joe, a male prostitute Mecha on the run from authorities after being charged with murder. David, Teddy and Joe go to the decadent resort town of Rouge City, where "Dr. Know", a holographic answer engine, directs them to the top of Rockefeller Center in the flooded ruins of Manhattan and has also provided fairy tale information interpreted by David as suggesting that a Blue Fairy has the power to help him. Above the ruins of Manhattan, David meets Professor Hobby, his creator, who tells him that their meeting demonstrates David's ability to love and desire. David finds many copies of himself, including a female variant Darlene, boxed and ready to be shipped. Disheartened by his lost sense of individuality, David attempts suicide by falling from a skyscraper into the ocean. While underwater, David catches sight of a figure resembling the Blue Fairy before Joe rescues him in an amphibious aircraft. Before David can explain, Joe is captured via electromagnet by authorities. David and Teddy take control of the aircraft to see the Fairy, which turns out to be a statue from an attraction on Coney Island. The two become trapped when the Wonder Wheel falls on their vehicle. Believing the Blue Fairy to be real, David asks the statue to turn him into a real boy, and repeats this request until his internal power source is depleted.

Two thousand years later, humans are extinct and Manhattan is buried under glacial ice. The Mecha have evolved into an advanced form called Specialists, and have become interested in learning about humanity. They find and revive David and Teddy. David walks to the frozen Blue Fairy statue, which collapses when he touches it. The Specialists reconstruct the Swinton family home from David's memories and explain to him, via an interactive image of the Blue Fairy, that it is impossible to make David a real boy. However, at David's insistence, they recreate Monica through genetic material from the strand of hair that Teddy kept. This Monica can only live for one day, and the process cannot be repeated. David spends his happiest day with Monica, and as she falls asleep in the evening, she tells David that she has always loved him: "the everlasting moment he had been waiting for", the narrator says. David falls asleep as well and goes to that place "where dreams are born".

Kubrick began development on an adaptation of "Super-Toys Last All Summer Long" in the late 1970s, hiring the story's author, Brian Aldiss, to write a film treatment. In 1985, Kubrick asked Steven Spielberg to direct the film, with Kubrick producing.[6] Warner Bros. agreed to co-finance A.I. and cover distribution duties.[7] The film labored in development hell, and Aldiss was fired by Kubrick over creative differences in 1989.[8] Bob Shaw briefly served as writer, leaving after six weeks due to Kubrick's demanding work schedule, and Ian Watson was hired as the new writer in March 1990. Aldiss later remarked, "Not only did the bastard fire me, he hired my enemy [Watson] instead." Kubrick handed Watson The Adventures of Pinocchio for inspiration, calling A.I. "a picaresque robot version of Pinocchio".[7][9][10]

Three weeks later, Watson gave Kubrick his first story treatment, and concluded his work on A.I. in May 1991 with another treatment of 90 pages. Gigolo Joe was originally conceived as a G.I. Mecha, but Watson suggested changing him to a male prostitute. Kubrick joked, "I guess we lost the kiddie market."[7] Meanwhile, Kubrick dropped A.I. to work on a film adaptation of Wartime Lies, feeling computer animation was not advanced enough to create the David character. However, after the release of Spielberg's Jurassic Park, with its innovative computer-generated imagery, it was announced in November 1993 that production of A.I. would begin in 1994.[11] Dennis Muren and Ned Gorman, who worked on Jurassic Park, became visual effects supervisors,[8] but Kubrick was displeased with their previsualization, and with the expense of hiring Industrial Light & Magic.[12]

"Stanley [Kubrick] showed Steven [Spielberg] 650 drawings which he had, and the script and the story, everything. Stanley said, 'Look, why don't you direct it and I'll produce it.' Steven was almost in shock."

Producer Jan Harlan, on Spielberg's first meeting with Kubrick about A.I.[13]

In early 1994, the film was in pre-production with Christopher "Fangorn" Baker as concept artist, and Sara Maitland assisting on the story, which gave it "a feminist fairy-tale focus".[7] Maitland said that Kubrick never referred to the film as A.I., but as Pinocchio.[12] Chris Cunningham became the new visual effects supervisor. Some of his unproduced work for A.I. can be seen on the DVD, The Work of Director Chris Cunningham.[14] Aside from considering computer animation, Kubrick also had Joseph Mazzello do a screen test for the lead role.[12] Cunningham helped assemble a series of "little robot-type humans" for the David character. "We tried to construct a little boy with a movable rubber face to see whether we could make it look appealing," producer Jan Harlan reflected. "But it was a total failure, it looked awful." Hans Moravec was brought in as a technical consultant.[12]Meanwhile, Kubrick and Harlan thought A.I. would be closer to Steven Spielberg's sensibilities as director.[15][16] Kubrick handed the position to Spielberg in 1995, but Spielberg chose to direct other projects, and convinced Kubrick to remain as director.[13][17] The film was put on hold due to Kubrick's commitment to Eyes Wide Shut (1999).[18] After the filmmaker's death in March 1999, Harlan and Christiane Kubrick approached Spielberg to take over the director's position.[19][20] By November 1999, Spielberg was writing the screenplay based on Watson's 90-page story treatment. It was his first solo screenplay credit since Close Encounters of the Third Kind (1977).[21] Spielberg remained close to Watson's treatment, but removed various sex scenes with Gigolo Joe.[citation needed] Pre-production was briefly halted during February 2000, because Spielberg pondered directing other projects, which were Harry Potter and the Philosopher's Stone, Minority Report, and Memoirs of a Geisha.[18][22] The following month Spielberg announced that A.I. 
would be his next project, with Minority Report as a follow-up.[23] When he decided to fast track A.I., Spielberg brought Chris Baker back as concept artist.[17] Ian Watson reported that the final script was very faithful to Kubrick's vision, even the ending, which is normally attributed to Spielberg, saying, "The final 20 minutes are pretty close to what I wrote for Stanley, and what Stanley wanted, faithfully filmed by Spielberg without added schmaltz."[24]

The original start date was July 10, 2000,[16] but filming was delayed until August.[25] Aside from a couple of weeks shooting on location in Oxbow Regional Park in Oregon, A.I. was shot entirely using sound stages at Warner Bros. Studios and the Spruce Goose Dome in Long Beach, California.[26]

Spielberg copied Kubrick's obsessively secretive approach to filmmaking by refusing to give the complete script to cast and crew, banning press from the set, and making actors sign confidentiality agreements. Social robotics expert Cynthia Breazeal served as technical consultant during production.[16][27] Costume designer Bob Ringwood studied pedestrians on the Las Vegas Strip for his influence on the Rouge City extras.[28]

The film's soundtrack was released by Warner Sunset Records in 2001. The original score was composed and conducted by John Williams and featured singers Lara Fabian on two songs and Josh Groban on one. The film's score also had a limited release as an official "For your consideration Academy Promo", as well as a complete score issue by La-La Land Records in 2015.[29] The band Ministry appears in the film playing the song "What About Us?" (but the song does not appear on the official soundtrack album).

Warner Bros. used an alternate reality game titled The Beast to promote the film. Over forty websites were created by Atomic Pictures in New York City (kept online at Cloudmakers.org) including the website for Cybertronics Corp. There were to be a series of video games for the Xbox video game console that followed the storyline of The Beast, but they went undeveloped. To avoid audiences mistaking A.I. for a family film, no action figures were created, although Hasbro released a talking Teddy following the film's release in June 2001.[16]

A.I. had its premiere at the Venice Film Festival in 2001.[30]

A.I. Artificial Intelligence was released on VHS and DVD in the US by DreamWorks Home Entertainment on March 5, 2002[31][32] in widescreen and full-screen 2-disc special editions featuring an extensive sixteen-part documentary detailing the film's development, production, music and visual effects. The bonus features also included interviews with Haley Joel Osment, Jude Law, Frances O'Connor, Steven Spielberg, and John Williams, two teaser trailers for the film's original theatrical release and an extensive photo gallery featuring production stills and Stanley Kubrick's original storyboards.[33] It was released overseas by Warner Home Video.

The film was first released on Blu-ray Disc in Japan by Warner Home Video on December 22, 2010, followed shortly after with a U.S release by Paramount Home Media Distribution (former owners of the DreamWorks catalog) on April 5, 2011. This Blu-ray featured the film newly remastered in high-definition and incorporated all the bonus features previously included on the 2-disc special-edition DVD.[34]

The film opened in 3,242 theaters in the United States on June 29, 2001, earning $29,352,630 during its opening weekend. A.I. went on to gross $78.62 million in the United States and $157.31 million in foreign markets, for a worldwide total of $235.93 million.[35]

Based on 193 reviews collected by Rotten Tomatoes, 74% of critics gave the film a positive notice, with an average score of 6.6/10. The website's critical consensus reads, "A curious, not always seamless, amalgamation of Kubrick's chilly bleakness and Spielberg's warm-hearted optimism. A.I. is, in a word, fascinating."[36] Metacritic assigned the film an average score of 65 out of 100, based on 32 reviews, which is considered favorable.[37]

Producer Jan Harlan stated that Kubrick "would have applauded" the final film, while Kubrick's widow Christiane also enjoyed A.I.[38] Brian Aldiss admired the film as well: "I thought what an inventive, intriguing, ingenious, involving film this was. There are flaws in it and I suppose I might have a personal quibble but it's so long since I wrote it." Of the film's ending, he wondered how it might have been had Kubrick directed the film: "That is one of the 'ifs' of film history; at least the ending indicates Spielberg adding some sugar to Kubrick's wine. The actual ending is overly sympathetic and moreover rather overtly engineered by a plot device that does not really bear credence. But it's a brilliant piece of film and of course it's a phenomenon because it contains the energies and talents of two brilliant filmmakers."[39] Richard Corliss heavily praised Spielberg's direction, as well as the cast and visual effects.[40]

Roger Ebert gave the film three stars, saying that it was "wonderful and maddening."[41] Ebert later gave the film four stars and added the film to his "Great Movies" list in 2011.[42] Leonard Maltin, on the other hand, gives the film two stars out of four in his Movie Guide, writing: "[The] intriguing story draws us in, thanks in part to Osment's exceptional performance, but takes several wrong turns; ultimately, it just doesn't work. Spielberg rewrote the adaptation Stanley Kubrick commissioned of the Brian Aldiss short story 'Super Toys Last All Summer Long'; [the] result is a curious and uncomfortable hybrid of Kubrick and Spielberg sensibilities." However, he calls John Williams' music score "striking". Jonathan Rosenbaum compared A.I. to Solaris (1972), and praised both "Kubrick for proposing that Spielberg direct the project and Spielberg for doing his utmost to respect Kubrick's intentions while making it a profoundly personal work."[43] Film critic Armond White, of the New York Press, praised the film noting that "each part of David's journey through carnal and sexual universes into the final eschatological devastation becomes as profoundly philosophical and contemplative as anything by cinema's most thoughtful, speculative artists: Borzage, Ozu, Demy, Tarkovsky."[44] Filmmaker Billy Wilder hailed A.I. as "the most underrated film of the past few years."[45] When British filmmaker Ken Russell saw the film, he wept during the ending.[46]

Mick LaSalle gave a largely negative review. "A.I. exhibits all its creators' bad traits and none of the good. So we end up with the structureless, meandering, slow-motion endlessness of Kubrick combined with the fuzzy, cuddly mindlessness of Spielberg." Dubbing it Spielberg's "first boring movie", LaSalle also believed the robots at the end of the film were aliens, and compared Gigolo Joe to the "useless" Jar Jar Binks, yet praised Robin Williams for his portrayal of a futuristic Albert Einstein.[47][failed verification] Peter Travers gave a mixed review, concluding "Spielberg cannot live up to Kubrick's darker side of the future." But he still put the film on his top ten list that year for best movies.[48] David Denby in The New Yorker criticized A.I. for not adhering closely to his concept of the Pinocchio character. Spielberg responded to some of the criticisms of the film, stating that many of the "so called sentimental" elements of A.I., including the ending, were in fact Kubrick's and the darker elements were his own.[49] However, Sara Maitland, who worked on the project with Kubrick in the 1990s, claimed that one of the reasons Kubrick never started production on A.I. was because he had a hard time making the ending work.[50] James Berardinelli found the film "consistently involving, with moments of near-brilliance, but far from a masterpiece. In fact, as the long-awaited 'collaboration' of Kubrick and Spielberg, it ranks as something of a disappointment." Of the film's highly debated finale, he claimed, "There is no doubt that the concluding 30 minutes are all Spielberg; the outstanding question is where Kubrick's vision left off and Spielberg's began."[51]

Screenwriter Ian Watson has speculated, "Worldwide, A.I. was very successful (and the 4th-highest earner of the year) but it didn't do quite so well in America, because the film, so I'm told, was too poetical and intellectual in general for American tastes. Plus, quite a few critics in America misunderstood the film, thinking for instance that the Giacometti-style beings in the final 20 minutes were aliens (whereas they were robots of the future who had evolved themselves from the robots in the earlier part of the film) and also thinking that the final 20 minutes were a sentimental addition by Spielberg, whereas those scenes were exactly what I wrote for Stanley and exactly what he wanted, filmed faithfully by Spielberg."[52] [Despite Watson's claim that A.I. was the year's fourth-highest earner, the film actually finished 16th worldwide among 2001 releases.][53]

In 2002, Spielberg told film critic Joe Leydon that "People pretend to think they know Stanley Kubrick, and think they know me, when most of them don't know either of us". "And what's really funny about that is, all the parts of A.I. that people assume were Stanley's were mine. And all the parts of A.I. that people accuse me of sweetening and softening and sentimentalizing were all Stanley's. The teddy bear was Stanley's. The whole last 20 minutes of the movie was completely Stanley's. The whole first 35, 40 minutes of the film, all the stuff in the house, was word for word, from Stanley's screenplay. This was Stanley's vision." "Eighty percent of the critics got it all mixed up. But I could see why. Because, obviously, I've done a lot of movies where people have cried and have been sentimental. And I've been accused of sentimentalizing hard-core material. But in fact it was Stanley who did the sweetest parts of A.I., not me. I'm the guy who did the dark center of the movie, with the Flesh Fair and everything else. That's why he wanted me to make the movie in the first place. He said, 'This is much closer to your sensibilities than my own.'"[54] He also added: "While there was divisiveness when A.I. came out, I felt that I had achieved Stanley's wishes, or goals."[55]

Upon rewatching the film many years after its release, BBC film critic Mark Kermode apologized to Spielberg in an interview in January 2013 for "getting it wrong" on the film when he first viewed it in 2001. He now believes the film to be Spielberg's "enduring masterpiece".[56]

Visual effects supervisors Dennis Muren, Stan Winston, Michael Lantieri, and Scott Farrar were nominated for the Academy Award for Best Visual Effects, while John Williams was nominated for Best Original Music Score.[57] Steven Spielberg, Jude Law and Williams received nominations at the 59th Golden Globe Awards.[58] A.I. was successful at the Saturn Awards, winning five awards, including Best Science Fiction Film along with Best Writing for Spielberg and Best Performance by a Younger Actor for Osment.[59]

Read more:
A.I. Artificial Intelligence - Wikipedia

Coronavirus puts artificial intelligence to the test – Los Angeles Times

Dr. Albert Hsiao and his colleagues at the UC San Diego health system had been working for 18 months on an artificial intelligence program designed to help doctors identify pneumonia on a chest X-ray. When the coronavirus hit the United States, they decided to see what it could do.

The researchers quickly deployed their program, which dots X-ray images with spots of color where there may be lung damage or other signs of pneumonia. It has now been applied to more than 6,000 chest X-rays, and "it's providing some value in diagnosis," said Hsiao, the director of UCSD's augmented imaging and artificial intelligence data analytics laboratory.

His team is one of several around the country that have pushed AI programs into the COVID-19 crisis to perform tasks like deciding which patients face the greatest risk of complications and which can be safely channeled into lower-intensity care.

The machine-learning programs scroll through millions of pieces of data to detect patterns that may be hard for clinicians to discern. Yet few of the algorithms have been rigorously tested against standard procedures. So while they often appear helpful, rolling out the programs in the midst of a pandemic could be confusing to doctors and dangerous for patients, some AI experts warn.

"AI is being used for things that are questionable right now," said Dr. Eric Topol, director of the Scripps Research Translational Institute and author of several books on health IT.

Topol singled out a system created by Epic, a major vendor of electronic health records software, that predicts which coronavirus patients may become critically ill. Using the tool before it has been validated is "pandemic exceptionalism," he said.

Epic said the company's model had been validated with data from more than 16,000 hospitalized COVID-19 patients in 21 healthcare organizations. No research on the tool has been published for independent researchers to assess, but in any case, it was developed to help clinicians make treatment decisions and is not a substitute for their judgment, said James Hickman, a software developer on Epic's cognitive computing team.

Others see the COVID-19 crisis as an opportunity to learn about the value of AI tools.

"My intuition is it's a little bit of the good, bad and ugly," said Eric Perakslis, a data science fellow at Duke University and former chief information officer at the Food and Drug Administration. "Research in this setting is important."

Nearly $2 billion poured into companies touting advancements in healthcare AI in 2019. Investments in the first quarter of 2020 totaled $635 million, up from $155 million in the first quarter of 2019, according to digital health technology funder Rock Health.

At least three healthcare AI technology companies have made funding deals specific to the COVID-19 crisis, including Vida Diagnostics, an AI-powered lung-imaging analysis company, according to Rock Health.

Overall, AI's implementation in everyday clinical care is less common than hype over the technology would suggest. Yet the coronavirus has inspired some hospital systems to accelerate promising applications.

UCSD sped up its AI imaging project, rolling it out in only two weeks.

Hsiao's project, with research funding from Amazon Web Services, the University of California and the National Science Foundation, runs every chest X-ray taken at its hospital through an AI algorithm. While no data on the implementation has been published yet, doctors report that the tool influences their clinical decision-making about a third of the time, said Dr. Christopher Longhurst, UCSD Health's chief information officer.

"The results to date are very encouraging, and we're not seeing any unintended consequences," he said. "Anecdotally, we're feeling like it's helpful, not hurtful."

AI has advanced further in imaging than in other areas of clinical medicine because radiological images have tons of data for algorithms to process, and more data makes the programs more effective, Longhurst said.

But while AI specialists have tried to get AI to do things like predict sepsis and acute respiratory distress (researchers at Johns Hopkins University recently won a National Science Foundation grant to use it to predict heart damage in COVID-19 patients), it has been easier to plug it into less risky areas such as hospital logistics.

In New York City, two major hospital systems are using AI-enabled algorithms to help them decide when and how patients should move into another phase of care or be sent home.

At Mount Sinai Health System, an artificial intelligence algorithm pinpoints which patients might be ready to be discharged from the hospital within 72 hours, said Robbie Freeman, vice president of clinical innovation at Mount Sinai.

Freeman described the AI's suggestion as a "conversation starter," meant to help clinicians working on patient cases decide what to do. AI isn't making the decisions.

NYU Langone Health has developed a similar AI model. It predicts whether a COVID-19 patient entering the hospital will suffer adverse events within the next four days, said Dr. Yindalon Aphinyanaphongs, who leads NYU Langones predictive analytics team.

The model will be run in a four- to six-week trial with patients randomized into two groups: one whose doctors will receive the alerts, and another whose doctors will not. The algorithm should help doctors generate a list of things that may predict whether patients are at risk for complications after they're admitted to the hospital, Aphinyanaphongs said.
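The randomization step described above (one arm whose doctors see the model's alerts, one arm whose doctors do not) can be sketched in a few lines. This is a generic illustration of a 50/50 trial assignment, not NYU Langone's actual implementation; the function name and structure are assumptions.

```python
import random

def assign_trial_arms(patient_ids, seed=0):
    """Randomly split patients into an 'alert' arm (doctors receive the
    model's predictions) and a 'control' arm (doctors do not)."""
    rng = random.Random(seed)   # fixed seed so the assignment is reproducible
    shuffled = list(patient_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"alert": shuffled[:half], "control": shuffled[half:]}
```

Comparing outcomes between the two arms is what lets researchers tell whether the alerts themselves change clinical decisions, rather than the model merely agreeing with what doctors would have done anyway.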

Some health systems are leery of rolling out a technology that requires clinical validation in the middle of a pandemic. Others say they didn't need AI to deal with the coronavirus.

Stanford Health Care is not using AI to manage hospitalized patients with COVID-19, said Ron Li, the center's medical informatics director for AI clinical integration. The San Francisco Bay Area hasn't seen the expected surge of patients who would have provided the mass of data needed to make sure AI works on a population, he said.

Outside the hospital, AI-enabled risk factor modeling is being used to help health systems track patients who aren't infected with the coronavirus but might be susceptible to complications if they contract COVID-19.

At Scripps Health, clinicians are stratifying patients to assess their risk of getting COVID-19 and experiencing severe symptoms using a risk-scoring model that considers factors like age, chronic conditions and recent hospital visits. When a patient scores 7 or higher, a triage nurse reaches out with information about the coronavirus and may schedule an appointment.
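The Scripps approach reads as a simple additive point model with a fixed outreach threshold. Here is a hypothetical sketch of that kind of scoring scheme; the factor weights and caps below are invented for illustration, since the article does not disclose Scripps Health's actual point values.

```python
def covid_risk_score(age, num_chronic_conditions, recent_hospital_visits):
    """Return an integer risk score; higher means more vulnerable.
    All point values are illustrative assumptions."""
    score = 0
    if age >= 75:
        score += 4
    elif age >= 65:
        score += 3
    elif age >= 50:
        score += 1
    score += 2 * min(num_chronic_conditions, 3)  # cap contribution at 6
    score += min(recent_hospital_visits, 2)      # cap contribution at 2
    return score

def needs_triage_outreach(score, threshold=7):
    """Per the article, a triage nurse reaches out at a score of 7 or higher."""
    return score >= threshold
```

The appeal of such a model is transparency: a nurse can see exactly which factors pushed a patient over the threshold, unlike with an opaque machine-learning classifier.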

Though emergencies provide unique opportunities to try out advanced tools, it's essential for health systems to ensure doctors are comfortable with them, and to use the tools cautiously, with extensive testing and validation, Topol said.

"When people are in the heat of battle and overstretched, it would be great to have an algorithm to support them," he said. "We just have to make sure the algorithm and the AI tool isn't misleading, because lives are at stake here."

Gold writes for Kaiser Health News, an editorially independent program of the Kaiser Family Foundation. It is not affiliated with Kaiser Permanente.

Read the original post:
Coronavirus puts artificial intelligence to the test - Los Angeles Times

How Educators Can Use Artificial Intelligence as a Teaching Tool – Education Week

Deb Norton spends her days helping teachers in Wisconsin's Oshkosh Area school district get more comfortable with technology tools they're using to engage students. A few years ago, she started seeing increasing mentions of artificial intelligence. Around then, the International Society for Technology in Education asked her to lead a course on the uses of artificial intelligence in the K-12 classroom.

She was initially intrigued when she saw students light up at the mention of artificial intelligence. It soon became clear to her that they were already experiencing AI in their daily lives, with tools like Instagram filters or chatbots on websites. "Watching them interact with this content really draws me in," Norton said.

Since then, she's been connecting an increasingly diverse set of educators with the possibilities of AI as a teaching tool. The course includes sections on the definition of artificial intelligence; machine learning; voice experiences and chatbots; and the role of data in AI systems. Attendees include K-12 teachers, administrators, and tech leaders, as well as representatives of technology companies.

Part of her mission has been to communicate that AI isn't new: the term was coined in 1956, and research has been underway for even longer, "but now, we're starting to use it in our everyday lives," she said.

"AI is so strong today that it can create a written paper, a song, a poem, a dance," Norton said. "Humans can perceive it as something that was created by a human when, in fact, AI created it on its own."

Here's what she thinks about its potential and the challenges to broader implementation.

Deb Norton

I think the most important thing that people have to realize is that artificial intelligence does encompass more than just a computer that can perform a task. Many people think artificial intelligence is just when my little Alexa Dot over there talks to me or when Netflix makes a recommendation for me. They often think it's a task-oriented type of thing. Our goal is often to think of AI beyond just performing tasks to something that is able to make decisions and hold conversations.

Many teachers will put together some type of interactive presentation just to present AI to the class, using real interactive components with the lessons so students are creating some of these cool AI experiments. Tech-coach administrators might present AI to teachers, getting them that knowledge or information through some type of workshop or webinar.

I had a group not long ago that created a lesson about machine learning using AI, and it was all tied to yoga, and how the student could do the yoga pose that could be recognized through machine learning, and then the machine could give them feedback on their yoga poses.

A lot of folks use the idea of how big data drives artificial intelligence. A lot of people go back [after the course] with creating chatbots or voice experiences. If you're working with elementary students, it might be a simple coding site like Scratch where you can create an interactive character or a program for creating an Alexa skill.

AI could become a really big part of virtual learning and at-home learning, but I just don't think we're quite there yet. For many of our educators, they're just dipping their feet into how this would work. Having a virtual tutor is something that is becoming more and more in the conversation of AI, but it is not something I see at this point in time being implemented.

I'm seeing little pieces of it globally, though: some seniors who were graduating in Japan could participate in their graduation via an AI robot that represents them. I've seen quite a few articles coming out of other countries on the ability to have a virtual tutor that can not just spew information at you and test your knowledge but rather learn your way of learning. We're not quite there yet.

With at-home learning, that need will be more prevalent. It will most likely grow quicker than if we didn't have at-home learning.

I think it's just both students and teachers knowing how it would work. Some of it is cost. A true chatbot that works on a website costs money. If you want something that will engage and work, that's a funding issue as well.

I think privacy is one of the big barriers. Many districts don't allow schools to open up Alexas and Google Homes because of the privacy of the data that's being collected. One suggestion is to set up a separate network at schools for the use of a smart speaker. Another suggestion is to use the Alexa App on a tablet instead of an actual smart speaker such as an Echo Dot or Google Home speaker. The app can be set up to only listen when you initiate it, unlike a smart speaker that is always listening.

Artificial intelligence can know what would be the best mode of delivering the content and at what pace and how deep. To be able to differentiate for every learner and know every learner's strengths and weaknesses, that would be incredible.

I also see the capabilities, from a teacher-educator point of view, to be able to engage and monitor and track the types of lessons and strategies that can be delivered in the most effective way in the classroom. AI could help with that, even if it's just as simple as an AI-powered search engine for a teacher in which they are able to search for content in a far deeper way than what we currently can.

Even voice experiences: let's say in the future a student had an earbud and a microphone. What if we could ask Alexa something deeper than fact? What if we can ask Alexa, what would be the best way for me to get information on such and such? What would be the best way for me to demonstrate this information to my peers?

Any time I talk about AI, not just in the course but in a webinar or live in person, it is a gamut of people from all walks of everything. We get elementary, middle, and high school teachers. We get professors. We get people who are leading a tech company and developing a product; they're asking, what can we do with our robots to incorporate AI?

We also get tech directors, a lot of administrators. Sometimes we'll get a superintendent of a school district. Sometimes, it's a person who's not even in education who just wants to learn more about AI.

Continued here:
How Educators Can Use Artificial Intelligence as a Teaching Tool - Education Week

Marshaling artificial intelligence in the fight against Covid-19 – MIT News

Artificial intelligence could play a decisive role in stopping the Covid-19 pandemic. To give the technology a push, the MIT-IBM Watson AI Lab is funding 10 projects at MIT aimed at advancing AI's transformative potential for society. The research will target the immediate public health and economic challenges of this moment. But it could have a lasting impact on how we evaluate and respond to risk long after the crisis has passed. The 10 research projects are highlighted below.

Early detection of sepsis in Covid-19 patients

Sepsis is a deadly complication of Covid-19, the disease caused by the new coronavirus SARS-CoV-2. About 10 percent of Covid-19 patients get sick with sepsis within a week of showing symptoms, but only about half survive.

Identifying patients at risk for sepsis can lead to earlier, more aggressive treatment and a better chance of survival. Early detection can also help hospitals prioritize intensive-care resources for their sickest patients. In a project led by MIT Professor Daniela Rus, researchers will develop a machine learning system to analyze images of patients' white blood cells for signs of an activated immune response against sepsis.

Designing proteins to block SARS-CoV-2

Proteins are the basic building blocks of life, and with AI, researchers can explore and manipulate their structures to address longstanding problems. Take perishable food: The MIT-IBM Watson AI Lab recently used AI to discover that a silk protein made by honeybees could double as a coating for quick-to-rot foods to extend their shelf life.

In a related project led by MIT professors Benedetto Marelli and Markus Buehler, researchers will enlist the protein-folding method used in their honeybee-silk discovery to try to defeat the new coronavirus. Their goal is to design proteins able to block the virus from binding to human cells, and to synthesize and test their unique protein creations in the lab.

Saving lives while restarting the U.S. economy

Some states are reopening for business even as questions remain about how to protect those most vulnerable to the coronavirus. In a project led by MIT professors Daron Acemoglu, Simon Johnson, and Asu Ozdaglar, researchers will model the effects of targeted lockdowns on the economy and public health.

In a recent working paper co-authored by Acemoglu, Victor Chernozhukov, Ivan Werning, and Michael Whinston, MIT economists analyzed the relative risk of infection, hospitalization, and death for different age groups. When they compared uniform lockdown policies against those targeted to protect seniors, they found that a targeted approach could save more lives. Building on this work, researchers will consider how antigen tests and contact tracing apps can further reduce public health risks.
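The intuition behind that finding can be shown with a toy two-group calculation. Everything here is invented for illustration (the populations, infection risks, and fatality rates are not from the working paper): because fatality rates differ sharply by age, concentrating a fixed overall lockdown intensity on the high-risk group averts more deaths than spreading it uniformly.

```python
# Toy two-group comparison of uniform vs. targeted lockdowns.
# All numbers are made-up assumptions, not the MIT economists' model.

def expected_deaths(infection_risk, ifr, population, lockdown_effect):
    """Deaths = population * infection risk * (1 - lockdown effect) * fatality rate."""
    return population * infection_risk * (1 - lockdown_effect) * ifr

groups = {
    "young":   {"pop": 900_000, "risk": 0.30, "ifr": 0.001},
    "seniors": {"pop": 100_000, "risk": 0.30, "ifr": 0.050},
}

def total_deaths(effects):
    return sum(
        expected_deaths(g["risk"], g["ifr"], g["pop"], effects[name])
        for name, g in groups.items()
    )

# Same population-weighted lockdown intensity (0.5 on average),
# allocated uniformly versus concentrated on seniors:
# 0.9 * 0.45 + 0.1 * 0.95 = 0.5.
uniform = total_deaths({"young": 0.5, "seniors": 0.5})
targeted = total_deaths({"young": 0.45, "seniors": 0.95})
```

In this toy setup the uniform policy yields 885 expected deaths and the targeted one about 224, at the same average intensity, mirroring the paper's qualitative conclusion that protecting seniors specifically saves more lives.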

Which materials make the best face masks?

Massachusetts and six other states have ordered residents to wear face masks in public to limit the spread of coronavirus. But apart from the coveted N95 mask, which traps 95 percent of airborne particles 300 nanometers or larger, the effectiveness of many masks remains unclear due to a lack of standardized methods to evaluate them.

In a project led by MIT Associate Professor Lydia Bourouiba, researchers are developing a rigorous set of methods to measure how well homemade and medical-grade masks do at blocking the tiny droplets of saliva and mucus expelled during normal breathing, coughs, or sneezes. The researchers will test materials worn alone and together, and in a variety of configurations and environmental conditions. Their methods and measurements will determine how well materials protect mask wearers and the people around them.

Treating Covid-19 with repurposed drugs

As Covid-19's global death toll mounts, researchers are racing to find a cure among already-approved drugs. Machine learning can expedite screening by letting researchers quickly predict if promising candidates can hit their target.

In a project led by MIT Assistant Professor Rafael Gomez-Bombarelli, researchers will represent molecules in three dimensions to see if this added spatial information can help to identify drugs most likely to be effective against the disease. They will use NASA's Ames and the U.S. Department of Energy's NERSC supercomputers to further speed the screening process.

A privacy-first approach to automated contact tracing

Smartphone data can help limit the spread of Covid-19 by identifying people who have come into contact with someone infected with the virus, and thus may have caught the infection themselves. But automated contact tracing also carries serious privacy risks.

In collaboration with MIT Lincoln Laboratory and others, MIT researchers Ronald Rivest and Daniel Weitzner will use encrypted Bluetooth data to ensure personally identifiable information remains anonymous and secure.

Overcoming manufacturing and supply hurdles to provide global access to a coronavirus vaccine

A vaccine against SARS-CoV-2 would be a crucial turning point in the fight against Covid-19. Yet, its potential impact will be determined by the ability to rapidly and equitably distribute billions of doses globally. This is an unprecedented challenge in biomanufacturing.

In a project led by MIT professors Anthony Sinskey and Stacy Springs, researchers will build data-driven statistical models to evaluate tradeoffs in scaling the manufacture and supply of vaccine candidates. Questions include how much production capacity will need to be added, the impact of centralized versus distributed operations, and how to design strategies for fair vaccine distribution. The goal is to give decision-makers the evidence needed to cost-effectively achieve global access.

Leveraging electronic medical records to find a treatment for Covid-19

Developed as a treatment for Ebola, the anti-viral drug remdesivir is now in clinical trials in the United States as a treatment for Covid-19. Similar efforts to repurpose already-approved drugs to treat or prevent the disease are underway.

In a project led by MIT professors Roy Welsch and Stan Finkelstein, researchers will use statistics, machine learning, and simulated clinical drug trials to find and test already-approved drugs as potential therapeutics against Covid-19. Researchers will sift through millions of electronic health records and medical claims for signals indicating that drugs used to fight chronic conditions like hypertension, diabetes, and gastric reflux might also work against Covid-19 and other diseases.

Finding better ways to treat Covid-19 patients on ventilators

Troubled breathing from acute respiratory distress syndrome is one of the complications that brings Covid-19 patients to the ICU. There, life-saving machines help patients breathe by mechanically pumping oxygen into the lungs. But even as towns and cities lower their Covid-19 infections through social distancing, there remains a national shortage of mechanical ventilators and serious health risks of ventilation itself.

In collaboration with IBM researchers Zach Shahn and Daby Sow, MIT researchers Li-Wei Lehman and Roger Mark will develop an AI tool to help doctors find better ventilator settings for Covid-19 patients and decide how long to keep them on a machine. Shortened ventilator use can limit lung damage while freeing up machines for others. To build their models, researchers will draw on data from intensive-care patients with acute respiratory distress syndrome, as well as Covid-19 patients at a local Boston hospital.

Returning to normal via targeted lockdowns, personalized treatments, and mass testing

In a few short months, Covid-19 has devastated towns and cities around the world. Researchers are now piecing together the data to understand how government policies can limit new infections and deaths and how targeted policies might protect the most vulnerable.

In a project led by MIT Professor Dimitris Bertsimas, researchers will study the effects of lockdowns and other measures meant to reduce new infections and deaths and prevent the health-care system from being swamped. In a second phase of the project, they will develop machine learning models to predict how vulnerable a given patient is to Covid-19, and what personalized treatments might be most effective. They will also develop an inexpensive, spectroscopy-based test for Covid-19 that can deliver results in minutes and pave the way for mass testing. The project will draw on clinical data from four hospitals in the United States and Europe, including Codogno Hospital, which reported Italy's first infection.

Read more:
Marshaling artificial intelligence in the fight against Covid-19 - MIT News

Artificial Intelligence in K-12: The Right Mix for Learning or a Bad Idea? – Education Week


Last year, officials at the Montour school district in western Pennsylvania approached band director Cyndi Mancini with an idea: How about using artificial intelligence to teach music?

Mancini was skeptical.

"As soon as I heard AI, I had this panic," she said. "All I thought about were these crazy robots that can think for themselves."

There were no robots. Just a web application that uses AI to build original instrumental tracks from a library of prerecorded samples after a user selects a few parameters.

Equipped with Chromebooks, Mancini's students could program mood and genre, manipulate the tempo or key, mute sections, and switch instrument kits with a couple of clicks. And just like that, an original piece was produced instantly.
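The vendor's actual engine is not public, but the workflow the students followed, pick a few parameters, get a track assembled from prerecorded samples, can be sketched as a parameterized selection over a sample library. Every name and parameter below is hypothetical:

```python
import random

# Hypothetical sample library: (genre, mood) -> candidate loops per instrument.
SAMPLES = {
    ("pop", "happy"): {"drums": ["pop_kit_a"], "bass": ["bright_bass"],
                       "keys": ["major_keys"]},
    ("cinematic", "sad"): {"drums": ["soft_perc"], "bass": ["low_drone"],
                           "keys": ["minor_pads"]},
}

def build_track(genre, mood, tempo_bpm, muted=(), seed=0):
    """Assemble a track spec by choosing one loop per unmuted instrument."""
    rng = random.Random(seed)  # fixed seed makes the choice reproducible
    bank = SAMPLES[(genre, mood)]
    layers = {inst: rng.choice(loops)
              for inst, loops in bank.items() if inst not in muted}
    return {"tempo": tempo_bpm, "layers": layers}

# A student choosing "pop"/"happy" at 120 bpm with the bass section muted:
track = build_track("pop", "happy", tempo_bpm=120, muted=("bass",))
print(track)
```

A real product adds audio rendering, key changes, and far larger libraries, but the user-facing idea is the same: a few clicks select parameters, and the system does the combinatorial assembly.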

The AI program, designed for use by anyone who needs cheap background tunes for media content, enabled Mancini to teach in ways not possible before: students in an elective course who do not play instruments or read sheet music were now creating their own compositions. For the musically inclined students, Mancini said, the software allowed for an even deeper fusion of computer and human: they'd create a track and play over it, combining AI-generated rhythms with live instrumentation.

"For me, music is an emotional experience. I know what I put into my playing and teaching of music. For that emotion to come out of an algorithm, I couldn't wrap my head around it at first. How can a computer replicate that?" she said. "But it can. I'm a convert."

While Montour is embracing AI technology with a full-blown bear hug, most school districts are not, at least not yet. Some are dabbling with applications. Others aren't using AI at all.

And still other educators can't say if their districts are using AI, oftentimes because they're not familiar enough with the technology to recognize it.

Whether that changes with the nationwide distance-learning experiment that happened this spring remains to be seen.

This much, however, is clear: school budgets are going to be devastated by the economic onslaught wrought by the virus, and cash-strapped districts that prioritize necessities could delay tech acquisitions other than the devices and hotspots students need to go online. Serious questions also linger about privacy, data bias, and just how effective AI solutions are for education.

The 3,000-student Montour district, in the suburbs of Pittsburgh, is using AI inside and outside the classroom.

The district teaches courses focused on artificial intelligence, ranging from ethics to robotics. It partners with universities and technology companies working on the cutting edge of AI. There's even a 4-foot-tall autonomous robot, a boxy machine that looks like a filing cabinet on wheels, zooming around the hallways of its elementary school delivering packages.

And on the district's back-end IT infrastructure, there are dashboards and programs powered by AI providing educators with real-time data about each student, producing metrics that monitor progress and even forecast future success.

"When we come back to school next year after the coronavirus, we're going to have data on every single kid from their remote learning experience," said Justin Aglio, the director of academic achievement and district innovation at Montour. "Not your traditional A, B, C data, either."

Districts, already overwhelmed just trying to keep up, might also shy away from AI tools in the immediate future while teachers and staff adjust to a new digital ecosystem that is already pushing boundaries for many.

"It's not even on our radar right now," said Andrew McDaniel, the principal of Southwood High School in central Indiana, when asked if he's considering incorporating some of the most basic forms of AI, such as Alexa voice devices, into classrooms. "A lot of teachers are looking at what they know works now and sticking to that. They're not going to mess around with much that goes beyond that."

Increasingly, though, voice-activated devices such as Alexa, Siri, and Google Home are being used as teaching assistants in classes. Schools are turning to smart thermostats to save money on energy costs and using AI programs to monitor their computer networks. AI is helping districts identify students who are at risk of dropping out, and math tutors and automated essay-scoring systems that have been used for decades now feature more sophisticated AI software than they did in the past.

Until recently, though, most of those tools have relied on simpler AI algorithms that work on a basis of preset rules and conditions.

But a new age of AI-based ed-tech tools is emerging, using machine-learning techniques to discover patterns and identify relationships that are not part of their original programming. These systems continually learn from data collected every time they're in use and more closely mirror human intelligence.
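The difference from rule-based tools is that these systems update their parameters after every interaction rather than being programmed once and frozen. A minimal illustration is online gradient descent on a stream of examples (toy numbers, not any vendor's model):

```python
# Online learning: the model improves a little after every single example.
# Here a 1-feature linear model learns the relation y = 2x from a stream.

def sgd_step(w, b, x, y, lr=0.1):
    """One online update: nudge weight and bias toward the observed target."""
    pred = w * x + b
    err = pred - y
    return w - lr * err * x, b - lr * err

w, b = 0.0, 0.0  # the model starts knowing nothing
stream = [(1, 2), (2, 4), (3, 6), (4, 8)] * 50  # repeated interactions
for x, y in stream:
    w, b = sgd_step(w, b, x, y)

print(w, b)  # w approaches 2 and b approaches 0 as the stream continues
```

A rule-based tutor would answer the 200th student exactly as it answered the 1st; an online learner like this has quietly reshaped itself with each data point, which is both the appeal and the source of the data-quality and bias concerns discussed below.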

Ed-tech vendors are pitching advanced statistical AI tools as a way to provide more personalized learning, tailoring curriculum to a student's strengths and weaknesses. Researchers say it is unlikely advanced AI will transform K-12 education, but it can have a positive impact in areas like adaptive instruction, automated essay scoring and feedback, language learning, and online curriculum-recommendation engines.

Most of the startups pioneering education solutions with this type of AI aren't yet in a position to offer their products on a mass scale in the United States. That's because highly accurate advanced AI systems require access to massive data sets to populate and train the machine-learning algorithm to make reliable predictions. Those algorithms must also have access to high-quality data to avoid reinforcing racial, gender, and other biases.

Bill Salak, the chief technology officer for Brainly, an AI-based content generator and homework assistant that uses machine learning, said his company has traditionally worked directly with students, not districts. Now, however, Brainly is diving into more advanced statistical models for its AI to allow for even deeper personalization, and it is planning to eventually start creating products that could go into the classroom.

Salak said that all AI-based technology vendors face an uphill climb because school districts are consistently underfunded, and if they're going to spend money on a tech tool, it has to be proven effective and shown to contribute to academic goals.

"The education systems prioritize things that will help them meet their goals, and not many outcomes relate to teaching with new tech," he said. "Even if the teacher may see a huge amount of value in something, at the end of the day, that teacher has to have a certain percentage of their kids meeting certain competency standards."

April DeGennaro, a teacher in the gifted program at Peeples Elementary in Fayetteville, Ga., knows firsthand what it's like for district administrators to buy into the idea of using AI tech tools but not back up that commitment with funding.

DeGennaro runs a lab where students focus on robotics, and her 4th graders use an AI-based robot called Cozmo. Shaped like a mini bulldozer that can fit in your palm, Cozmo uses facial recognition and a so-called emotion engine, allowing it to react to different situations with a humanlike personality by showing a range of emotions, from happy or sad to bored and grumpy. Because of COVID-19-related school closures, the AI robots currently aren't being used.

But under normal circumstances, up to four students can use one of the robots at a time with an iPad, coding it to carry out different tasks. At $150 each, DeGennaro said the robots amount to a low-cost investment, but she's had to find her own funding for all seven Cozmo robots in her class.

DeGennaro raised money online, where she got parents to chip in to buy robots. She's also made it clear to those who know her: "For Christmas, for an end-of-the-year gift, or whenever you want to buy Mrs. D a present, buy a robot."

"School districts may like things," DeGennaro said, "but that doesn't mean they're going to fund them."

At the Saddle Mountain Unified School District in Arizona, a new policy allowing high school teachers to use Alexa or Google Home went into effect this year after a group of district officials and teachers walked through several STEM schools in the Phoenix area and saw the devices being used in classrooms.

Joel Wisser, the technology integration specialist for the 2,300-student district, said teachers walked away impressed, and several decided to incorporate the devices into their daily classroom activities. The district didn't pay for the devices, however. Instead, teachers had to bring their own, and Wisser said he doesn't expect that to change.

One history teacher uses his Alexa as a mini-assistant: reminding him when to return papers to students, answering student and teacher inquiries, running a Jeopardy-style quiz game, or even playing music from the time period the class is studying to add ambience to a lesson.

"It's really just a personal assistant, a helper, for him. His eyesight is not great. He has a 46-inch computer monitor and he's not a fast typer," said Wisser. "Being able to talk to a device is much more efficient for him, so he's not spending time at a keyboard typing in the words 'ancient Greek music.'"

Not everyone welcomed the devices at first. The district's technology director, for one, was hesitant because the Alexa was going to be tapped into the district's network, and he wasn't going to have complete control over it, Wisser said.

The voice-activated speakers are also at the center of an ongoing privacy debate, since they can record conversations. Wisser said there hadn't been any pushback from parents so far, and class conversations were not recorded.

Christina Gardner-McCune, the director of the Engaging Learning Labs at the University of Florida, said parents, students, and teachers have concerns about what kind of data an Alexa device is collecting in the classroom and what it is doing on the district's network. Even though the recording function on an Alexa can be turned off, Gardner-McCune said some districts don't want anything to do with them.

"A lot of districts are not allowing those devices in the classroom even though they could have some educational purposes," said Gardner-McCune, who is also a steering committee co-chair of the AI for K-12 Initiative, a national working group of teachers and AI experts focused on jump-starting discussion on how to incorporate AI learning into school curricula.

It will take more time and use of AI devices and tech tools in classrooms before districts become comfortable with them on a larger scale, she said. And more research is needed showing the benefits of advanced AI systems before districts are willing to pony up for them. "For major school districts," said Gardner-McCune, "it's going to come down to 'how does it affect test scores.'"

Back in the Montour district, band director and teacher Mancini said her apprehension about the AI music program vanished when she became familiar with the web application and realized there wasn't going to be "a robot in the middle of my room." One of her favorite class exercises using the AI music program involved muting the background music on a movie clip, like the scene where the ship is sinking in Titanic, and letting students rework the general vibe by adding their own music.

"Music education has been so traditionally taught one way. We play instruments or sing or learn music theory. This is so far from traditional, and I'm glad I did it because it was so much fun when I got into it," she said. "As teachers, we just need to not be afraid of technology."

