Artificial intelligence and the classroom of the future | BrandeisNOW – Brandeis University

By Tessa Venell '08 | Nov. 19, 2020

Imagine a classroom in the future where teachers are working alongside artificial intelligence partners to ensure no student gets left behind.

The AI partner's careful monitoring picks up on a student in the back who has been quiet and still for the whole class, and the AI partner prompts the teacher to engage the student. When called on, the student asks a question. The teacher clarifies the material that has been presented and every student comes away with a better understanding of the lesson.

This is part of a larger vision of future classrooms where human instruction and AI technology interact to improve educational environments and the learning experience.

James Pustejovsky, the TJX Feldberg Professor of Computer Science, is working towards that vision with a team led by the University of Colorado Boulder, as part of the new $20 million National Science Foundation-funded AI Institute for Student-AI Teaming.

The research will play a critical role in helping ensure the AI agent is a natural partner in the classroom, with language and vision capabilities, allowing it to not only hear what the teacher and each student is saying, but also notice gestures (pointing, shrugs, shaking a head), eye gaze, and facial expressions (student attitudes and emotions).

Pustejovsky took some time to answer questions from BrandeisNOW about his research.

How does your research help build this classroom of the future?

For the past five years, we have been working to create a multimodal embodied avatar system, called Diana, that interacts with a human to perform various tasks. She can talk, listen, see, and respond to language and gesture from her human partner, and then perform actions in a 3D simulation environment called VoxWorld. This is work we have been conducting with our collaborators at Colorado State University, led by Ross Beveridge in their vision lab. We are working together again (CSU and Brandeis) to help bring this kind of embodied human-computer interaction into the classroom. Nikhil Krishnaswamy, my former Ph.D. student and co-developer of Diana, has joined CSU as part of their team.

How does it work in the context of a classroom setting?

At first it's disembodied, a virtual presence on an iPad, for example, where it is able to recognize the voices of different students. So imagine a classroom: Six to 10 children in grade school. The initial goal in the first year is to have the AI partner passively following the different students, in the way they're talking and interacting, and then eventually the partner will learn to intervene to make sure that everyone is equitably represented and participating in the classroom.

Are there other settings that Diana would be useful in besides a classroom?

Let's say I've got a Julia Child app on my iPad and I want her to help me make bread. If I start the program on the iPad, the Julia Child avatar would be able to understand my speech. If I have my camera set up, the program allows me to be completely embedded and embodied in a virtual space with her so that she can help me.

Screenshot of the embodied avatar system Diana.

How does she help you?

She would look at my table and say, "Okay, do you have everything you need?" And then I'd say, "I think so." So the camera will be on, and if you had all your baking materials laid out on your table, she would scan the table. She'd say, "I see flour, yeast, salt, and water, but I don't see any utensils: you're going to need a cup, you're going to need a teaspoon." After you had everything you needed, she would tell you to put the flour in that bowl over there. And then she'd show you how to mix it.

Is that where Diana comes in?

Yes, Diana is basically becoming an embodied presence in the human-computer interaction: she can see what you're doing, you can see what she's doing. In a classroom interaction, Diana could help with guiding students through lesson plans, through dialogue and gesture, while also monitoring the students' progress, mood, and levels of satisfaction or frustration.

Does Diana have any uses in virtual learning in education?

Using an AI partner for virtual learning could be a fairly natural interaction. In fact, with a platform such as Zoom, many of the computational issues are actually easier, since voice and video tracks of different speakers have already been segmented and identified. Furthermore, in a Hollywood Squares display of all the students, a virtual AI partner may not seem as unnatural, and Diana might more easily integrate with the students online.

What stage is the research at now?

Within the context of the CU Boulder-led AI Institute, the research has just started. It's a five-year project, and it's getting off the ground. This is exciting new research that is starting to answer questions about using our avatar and agent technology with students in the classroom.

The research is funded by the National Science Foundation, and partners with CU Boulder on the research include Brandeis University, Colorado State University, the University of California, Santa Cruz, UC Berkeley, Worcester Polytechnic Institute, Georgia Institute of Technology, University of Illinois at Urbana-Champaign, and University of Wisconsin-Madison.


Reaping the benefits of artificial intelligence – FoodManufacture.co.uk

Security and food safety

Many factories are introducing smart machinery, taking advantage of the benefits that robots on the production line, connected devices and predictive maintenance offer. However, smart devices require an internet connection, and anything with an internet connection can be hacked, potentially leading to data loss or compromising the safety of the final product.

Who bears contractual responsibility in this event? You? The supplier? The AI itself? There is no clear answer to this, but in our view the responsibility for the actions of a smart device will likely lie with the operator.

We expect that the law will eventually place liability automatically on the supplier in certain circumstances, such as where it fails to ensure that its algorithms are free from bias or discrimination.

Intellectual property and data

The optimisations AI can deliver are the product of the data the machine has learnt from. For the AI tool to tell your business how to optimise its production process or reduce its wastage, you must first hand over valuable information of your own.

The AI's learnings can now be applied to your business, but the AI tool now has another data point: yours. And there is nothing to stop its owner going to a competitor of yours and teaching it efficiencies based on its newly expanded (thanks to your business) pool of data.

How can we stop data spreading to competitors? How do you share the gains fairly between the two organisations? These are issues that must be documented carefully in your contracts, as the current law does not yet provide a clear answer.

As AI gets more advanced, it may begin to create new ideas, recipes or methods of production, requiring less and less human input to do so. Who would own the intellectual property rights in new inventions created solely by AI?

The legal community is still carefully considering the ownership of IP developed by non-humans, but early-adopters of these technologies should be contemplating the ownership question and documenting it in their contracts now.

In the UK, the regulatory issues surrounding AI are still being debated and different bodies have different views. Some believe regulation is urgently needed, whilst others consider that the technology needs to be more widely deployed before rules dictating its use can be drafted. What is clear, though, is the need to take a holistic approach.

The legal implications of AI cannot be looked at in silos, for instance only from a data protection perspective or only from an antitrust perspective; any regulation of AI must be reviewed as a whole, and the risks and benefits carefully weighed.

This is particularly true for the use of AI technologies in the food manufacturing industry where consumer safety is at stake. It may be too early to build laws controlling AI tools used to manufacture consumer goods, but the consequences of AI getting it wrong could be highly damaging and result in the industry rejecting AI completely, despite its many benefits.

Until the law does catch up, make sure to read the small print on security policies, adhere to best practice information management processes and document agreed terms clearly with suppliers.


Meet GPT-3. It Has Learned to Code (and Blog and Argue). – The New York Times

Before asking GPT-3 to generate new text, you can focus it on particular patterns it may have learned during its training, priming the system for certain tasks. You can feed it descriptions of smartphone apps and the matching Figma code. Or you can show it reams of human dialogue. Then, when you start typing, it will complete the sequence in a more specific way. If you prime it with dialogue, for instance, it will start chatting with you.

"It has this emergent quality," said Dario Amodei, vice president for research at OpenAI. "It has some ability to recognize the pattern that you gave it and complete the story, give another example."

Previous language models worked in similar ways. But GPT-3 can do things that previous models could not, like write its own computer code. And, perhaps more important, you can prime it for specific tasks using just a few examples, as opposed to the thousands of examples and several hours of additional training required by its predecessors. Researchers call this few-shot learning, and they believe GPT-3 is the first real example of what could be a powerful phenomenon.
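
As a concrete illustration of that few-shot idea, here is a minimal sketch of how a primed prompt can be assembled. The task, examples and labels below are invented for illustration, and the string is simply printed rather than sent to any model; the point is that the behavior is specified by a handful of demonstrations instead of thousands of training examples.

```python
# Minimal sketch of few-shot "priming": the task is specified entirely by a
# handful of demonstrations placed ahead of the text the model should complete.
# The examples and labels are invented; in practice the prompt would be sent
# to a large language model rather than printed.

def build_prompt(examples, query):
    """Concatenate labeled demonstrations, then leave the last label blank."""
    parts = [f"Review: {text}\nSentiment: {label}\n" for text, label in examples]
    parts.append(f"Review: {query}\nSentiment:")
    return "\n".join(parts)

few_shot_examples = [
    ("The battery lasts all day and the screen is gorgeous.", "Positive"),
    ("It crashed twice in the first hour. Avoid.", "Negative"),
    ("Does exactly what it promises, nothing more.", "Positive"),
]

prompt = build_prompt(few_shot_examples, "Setup took forever and support never answered.")
print(prompt)
# A model primed with this prompt would be expected to continue with "Negative".
# Swapping in dialogue turns instead of reviews would make it chat, with no retraining.
```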

"It exhibits a capability that no one thought possible," said Ilya Sutskever, OpenAI's chief scientist and a key figure in the rise of artificial intelligence technologies over the past decade. "Any layperson can take this model and provide these examples in about five minutes and get useful behavior out of it."

This is both a blessing and a curse.

OpenAI plans to sell access to GPT-3 via the internet, turning it into a widely used commercial product, and this year it made the system available to a limited number of beta testers through their web browsers. Not long after, Jerome Pesenti, who leads the Facebook A.I. lab, called GPT-3 unsafe, pointing to sexist, racist and otherwise toxic language the system generated when asked to discuss women, Black people, Jews and the Holocaust.

With systems like GPT-3, the problem is endemic. Everyday language is inherently biased and often hateful, particularly on the internet. Because GPT-3 learns from such language, it, too, can show bias and hate. And because it learns from internet text that associates atheism with the words "cool" and "correct" and that pairs Islam with "terrorism," GPT-3 does the same thing.

This may be one reason that OpenAI has shared GPT-3 with only a small number of testers. The lab has built filters that warn that toxic language might be coming, but they are merely Band-Aids placed over a problem that no one quite knows how to solve.


Can a Computer Devise a Theory of Everything? – The New York Times

"By the time that A.I. comes back and tells you that, then we have reached artificial general intelligence, and you should be very scared or very excited, depending on your point of view," Dr. Tegmark said. "The reason I'm working on this, honestly, is because what I find most menacing is, if we build super-powerful A.I. and have no clue how it works, right?"

Dr. Thaler, who directs the new institute at M.I.T., said he was once a skeptic about artificial intelligence but now was an evangelist. He realized that as a physicist he could encode some of his knowledge into the machine, which would then give answers that he could interpret more easily.

"That becomes a dialogue between human and machine in a way that becomes more exciting," he said, "rather than just having a black box you don't understand making decisions for you."

He added, "I don't particularly like calling these techniques 'artificial intelligence,' since that language masks the fact that many A.I. techniques have rigorous underpinnings in mathematics, statistics and computer science."

Yes, he noted, the machine can find much better solutions than he can despite all of his training: "But ultimately I still get to decide what concrete goals are worth accomplishing, and I can aim at ever more ambitious targets knowing that, if I can rigorously define my goals in a language the computer understands, then A.I. can deliver powerful solutions."

Recently, Dr. Thaler and his colleagues fed their neural network a trove of data from the Large Hadron Collider, which smashes together protons in search of new particles and forces. Protons, the building blocks of atomic matter, are themselves bags of smaller entities called quarks and gluons. When protons collide, these smaller particles squirt out in jets, along with whatever other exotic particles have coalesced out of the energy of the collision. To better understand this process, he and his team asked the system to distinguish between the quarks and the gluons in the collider data.

"We said, 'I'm not going to tell you anything about quantum field theory; I'm not going to tell you what a quark or gluon is at a fundamental level,'" he said. "I'm just going to say, 'Here's a mess of data, please separate it into basically two categories.' And it can do it."
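
The group's actual analysis is far more sophisticated, but the basic exercise of handing a system a mess of data and asking it to split it into two categories can be sketched with ordinary clustering. The jet features and distributions below are invented for illustration; they only loosely reflect the well-known tendency of gluon jets to contain more particles and to be wider than quark jets.

```python
# Illustrative sketch only: separate simulated "jets" into two categories
# without telling the algorithm anything about quarks or gluons.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Invented stand-in features: (number of particles in the jet, jet width).
# Gluon jets tend, on average, to be busier and wider than quark jets,
# but these particular numbers are made up for the demo.
quark_like = np.column_stack([rng.poisson(15, 5000), rng.normal(0.08, 0.02, 5000)])
gluon_like = np.column_stack([rng.poisson(25, 5000), rng.normal(0.14, 0.03, 5000)])
jets = np.vstack([quark_like, gluon_like])

# "Here's a mess of data, please separate it into basically two categories."
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(jets)

# Check how cleanly the two unlabeled clusters line up with the true origin.
truth = np.array([0] * 5000 + [1] * 5000)
agreement = max((labels == truth).mean(), (labels != truth).mean())
print(f"cluster/true-category agreement: {agreement:.2f}")
```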


New York City wants to restrict artificial intelligence in hiring – CBS News

New York City is trying to rein in the use of algorithms used to screen job applicants. It's one of the first cities in the U.S. to try to regulate what is an increasingly common and opaque hiring practice.

The city council is considering a bill that would require potential employers to notify job candidates about the use of these tools, referred to as "automated decision systems." Companies would also have to complete an annual audit to make sure the technology doesn't result in bias.

The move comes as the use of artificial intelligence in hiring skyrockets, increasingly replacing human screeners. Fortune 500 companies including Delta, Dunkin, Ikea and Unilever have turned to AI for help assessing job applicants. These tools run the gamut from a simple text reader that screens applications for particular words and phrases, to a system that evaluates videos of potential applicants to judge their suitability for the job.
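
At the simple end of that spectrum, a keyword screener amounts to little more than string matching. The sketch below is a hypothetical illustration rather than any vendor's product; the required terms and threshold are made up.

```python
# Hypothetical illustration of the simplest kind of automated screen:
# pass or reject a resume based on the presence of particular words and phrases.
REQUIRED_TERMS = {"python", "sql", "machine learning"}   # invented for the example
MINIMUM_MATCHES = 2

def screen_resume(resume_text: str) -> bool:
    """Return True if the resume mentions enough of the required terms."""
    text = resume_text.lower()
    matches = sum(term in text for term in REQUIRED_TERMS)
    return matches >= MINIMUM_MATCHES

print(screen_resume("Built machine learning pipelines in Python."))  # True
print(screen_resume("Ten years of operations leadership."))          # False
# A screen this crude never sees context, which is exactly why audits for
# disparate impact, as the New York bill proposes, matter.
```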

"We have all the reasons to believe that every major company uses some algorithmic hiring," Julia Stoyanovich, a founding director of the Center for Responsible AI at New York University, said in a recent webinar.

At a time when New Yorkers are suffering double-digit unemployment, legislators are concerned about the brave new world of digital hiring. Research has shown that AI systems can introduce more problems than they solve. Facial-recognition tools that use AI have demonstrated trouble in identifying faces of Black people, determining people's sex and wrongly matching members of Congress to a mugshot database.

In perhaps the most notorious example of AI bias, a hiring tool developed internally at Amazon had to be scrapped because it discriminated against women. The tool was developed using a 10-year history of resumes submitted to the company, whose workforce skews male. As a result, the software effectively "taught" itself that male candidates were preferable and demoted applications that included the word "women," or the names of two all-women's colleges. While the tool was never used, it demonstrates the potential pitfalls of substituting machine intelligence for human judgment.
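
Amazon never published the details of its scrapped tool, so the following is not a reconstruction of it. It is a toy logistic regression on synthetic data meant only to illustrate the mechanism the article describes: when historical outcomes in the training data are skewed against a group, the learner assigns negative weight to features that merely correlate with that group.

```python
# Toy demonstration of how bias enters a learned screening model.
# The data is synthetic and invented; it is not Amazon's data or model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Feature 0: a genuine qualification signal.
# Feature 1: whether the resume mentions a women's college or club
#            (correlates with gender, not with qualification).
qualification = rng.normal(size=n)
mentions_womens = rng.binomial(1, 0.3, size=n)

# Historical hiring outcomes: driven by qualification, but with a penalty
# applied to the "mentions_womens" group, mimicking a skewed history.
logits = 1.5 * qualification - 1.0 * mentions_womens
hired = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = LogisticRegression().fit(np.column_stack([qualification, mentions_womens]), hired)
print("learned weights:", model.coef_[0])
# The second weight comes out negative: the model has "taught itself" to demote
# resumes containing the word, reproducing the historical skew rather than merit.
```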

"As legislators in a city home to some of the world's largest corporations, we must intervene and prevent unjust hiring," city council member Laurie Cumbo, the bill's sponsor, said at a hearing for the legislation last week.

Several civil rights groups say New York's proposed bill doesn't go far enough. A dozen groups including the AI Now Institute, New York Civil Liberties Union and New York Communities for Change issued a letter last week pushing for the law to cover more types of automated tools and more steps in the hiring process. They want the measure to include heavier penalties, enabling people to sue if they've been passed over for a job because of biased algorithms. This would be in line with existing employment law, which allows applicants to sue for discrimination because of race or sex.

"If we pass [the bill] as it is worded today, it will be a rubber stamp for some of the worst forms of algorithmic discrimination," Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, told the city council.

"We need much stronger penalties," he said. "Just as we do with every other form of employment discrimination, we need that private-sector enforcement."

Alicia Mercedes, a spokesperson for Cumbo, the bill's sponsor, said the bill is still in its early stages and is likely to change in response to feedback.

"We're committed to seeing this legislation come out as something that can be effective, so we will of course take any input that we can get from those who are working on these issues every day," Mercedes said.

For hiring professionals, the main appeal of AI is its capacity to save time. But technologists have also touted the potential for automated programs, if used correctly, to eliminate human biases, such as the well-documented tendency for hiring managers to overlook African-American applicants or look favorably on candidates who physically resemble the hiring manager.

"When only a human reviews a resume, unfortunately, humans can't un-see the things that cause unconscious biases if someone went to the same alma mater or grew up in same community," said Athena Karp, CEO of HiredScore, an AI hiring platform.

Karp said she supports the New York bill. "If technologies are used in hiring, the makers of technology, and candidates, can and should know how they're being used," she said at the hearing.

In the U.S., the only place where this is currently the case is in Illinois, whose Biometric Privacy Act requires employers to tell candidates if AI is being used to evaluate them and allows the candidates to opt out. On the federal level, a bill to study bias in algorithms has been introduced in Congress. In New York, most job candidates have no clue they're being screened by software, even those who are computer scientists themselves.

"I've received my fair share of job and internship rejections in my graduate and undergraduate careers," said Lauren D'Arinzo, a master's degree candidate in data science and AI at New York University. "It is unsettling to me that a future employer might disregard my application based on the output of an algorithm."

She added, "What worries me most is, had I not been recruited into a project explicitly doing research in this space, I would likely not have even known that these types of tools are regularly used by Fortune 500 companies."


Mediavine Expands Access to Contextual Ad Targeting Using GumGum’s Verity Artificial Intelligence Product – PRNewswire

BOCA RATON, Fla., Nov. 24, 2020 /PRNewswire/ -- Mediavine, the largest exclusive ad management company in the U.S., has expanded its contextual ad targeting with Magnite and PubMatic using GumGum's Verity artificial intelligence (AI) product. Through Verity, Mediavine is passing contextual segments in the bid request to industry-leading Supply Side Platforms (SSPs). In mid-2019, Mediavine began offering contextual targeting as a way to combat the instability of third-party cookies in the various internet browsers. Due to growing demand from marketers, Mediavine has expanded its availability.
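
Neither Mediavine nor GumGum publishes the exact integration details, so the snippet below is only a rough, hypothetical sketch of what passing contextual segments in an OpenRTB-style bid request can look like: the page's detected categories and keywords travel with the impression instead of a user identifier or cookie.

```python
# Rough, hypothetical illustration of contextual segments riding in an
# OpenRTB-style bid request. Field values are invented; the real integration
# between Mediavine, Verity, and the SSPs is not public.
import json

bid_request = {
    "id": "example-request-123",
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],
    "site": {
        "page": "https://example-food-blog.com/sourdough-recipe",
        "content": {
            # Contextual classification of the page itself (no cookies involved):
            "cat": ["IAB8"],                 # e.g. Food & Drink
            "keywords": "sourdough,baking,recipe",
        },
    },
    # Note what is absent: no user ID, no behavioral history, no third-party cookie.
}

print(json.dumps(bid_request, indent=2))
```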

"With Safari already blocking third-party cookies and Chrome planning the same in 2021, it is critical we remain proactive in testing every potential way to serve relevant audiences to marketers," said Mediavine SVP of Sales & Revenue Phil Bohn. "Contextual targeting is one of several tools we have implemented, and will continue to implement, to curb the economic impact of these changes. After a successful launch with Verity in 2019 we are expanding the targeting capabilities with other SSPs to meet the demands of our advertiser partners."

Verity relies on contextual data rather than the behavioral data third-party cookies collect. The AI product is designed to provide contextual data and content classification to help publishers serve keyword and contextual categorization, allowing for more robust ad targeting and brand suitability in the era of cookie-less data.

"Verity is one of the industry's most sophisticated contextual targeting technologies to helpadvertisers stay on the right side of evolving privacy regulations," said GumGum CEO Phil Schraeder. "We are proud to say Verity is the only contextual intelligence solution that considers images, text and metadata when scoring relevance and determining suitability, ensuring brand safety and campaign success."

"Contextual data will be increasingly important in a cookie-less internet and we are working with several different approaches to bring context to the forefront," said Magnite Chief Technology Officer Tom Kershaw. "Cooperation across the industry is essential to ensuring success and while this is just the beginning, we see this as a promising start."

About Mediavine

Mediavine is the largest exclusive ad management company in the United States, representing and monetizing more than 8,000 publisher partner websites in addition to its owned and operated properties. Mediavine proudly ranks as a Comscore top five lifestyle property with 130 million unique monthly visitors and 13 billion monthly ad impressions. Additionally, Mediavine is an award-winning Google Certified Publishing Partner, Trustworthy Accountability Group (TAG), Ads.txt and GDPR compliant, and is also a member of the Coalition for Better Ads and Prebid.org.

To learn more about Mediavine, visit http://www.mediavine.com or follow us on Twitter, Facebook, LinkedIn and Instagram.

Contact: Alysha Duff, Media Relations Specialist, [email protected], (954) 800-5205 ext. 013

SOURCE Mediavine



Explainable-AI (Artificial Intelligence – XAI) Image Recognition Startup Included in Responsible AI (RAI) Solutions Report, by a Leading Global…

POTOMAC, Md., Nov. 23, 2020 /PRNewswire/ --Z Advanced Computing, Inc. (ZAC), the pioneer Explainable-AI (Artificial Intelligence) software startup, was included by Forrester Research, Inc. in their prestigious report: "New Tech: Responsible AI Solutions, Q4 2020" (November 23, 2020, at https://go.Forrester.com/). Forrester Research is a leading global research and advisory firm, performing syndicated research on technology and business, advising major corporations, governments, investors, and financial sectors. ZAC is the first to demonstrate Cognition-based Explainable-AI (XAI), where various attributes and details of 3D (three dimensional) objects can be recognized from any view or angle. "With our superior algorithms, complex 3D objects can be recognized from any direction, using only a small number of training samples," said Dr. Saied Tadayon, CTO of ZAC. "You cannot do this with the other techniques, such as Deep Convolutional Neural Networks (CNN), even with an extremely large number of training samples. That's basically hitting the limitations of CNNs, which others are using now," continued Dr. Bijan Tadayon, CEO of ZAC. "For complex tasks, such as detailed 3D image recognition, you need ZAC Cognitive algorithms. ZAC also requires less CPU/ GPU and electrical power to run, which is great for mobile or edge computing," emphasized Dr. Saied Tadayon.


Joint Artificial Intelligence Center Has Substantially Grown To Aid The Warfighter – Department of Defense

It was just two years ago when the Joint Artificial Intelligence Center was created to grab the transformative potential of artificial intelligence technology for the benefit of America's national security, and it has grown substantially from humble beginnings.

Dana Deasy, the Defense Department's chief information officer, and Marine Corps Lt. Gen. Michael Groen, the director of the JAIC, virtually discussed from the Pentagon the growth and goals of JAIC at a FedTalks event during National AI Week.

''One of the things we've wanted to keep in our DNA is this idea that we want to hire a lot of diversity of thought into [JAIC],'' Deasy said, ''but yet do that in a way where that diversity of thought coalesces around a couple of really important themes.''

When JAIC began, it needed to grab hold of some projects that can show people that it can be nimble, agile, and it has the talent to give something that is meaningful back to the Defense Department, he noted.

So JAIC started in a variety of different places, Deasy said. ''But now as we've matured, we really need to focus on what was the core mission for JAIC. And that was, we have to figure out what the role is that AI plays in enabling the warfighter. And I've always said that JAIC should be central to any and all future discussions in that place,'' the CIO said.

''Transformation is our vision,'' Groen said.

''So, it's a big job. We discovered pretty quickly that seeding the environment with lots of small AI projects was not transformational in and of itself. We knew we had to do more. And so, what we're calling JAIC 2.0 is a focused transition in a couple of ways. [For example], we're going to continue to build AI products, because the talent in the JAIC is just superb,'' the JAIC director said.

Groen noted that the JAIC is thinking about solution spaces for a broad base of customers, which really gets it focused.

''There are, you know, the application, and the utilization of AI across the department [that] is very uneven. We have places that are really good. And there, some of the services are just doing fantastic things. And we have some places, large-scale enterprises with fantastic use cases [that] really could use AI, but they don't know where to start. So, we're going to shift from a transformational perspective to start looking at that broad base of customers and enable them,'' he said.

JAIC is going to continue to work with the military services on the cutting edge of AI and AI application, especially in the integration space, where JAIC is bringing together functions such as intelligence and maneuver, Groen said. ''The warfighting functions have superb stovepipes. But now we need to bring those stovepipes together and integrate them through AI,'' he added.


The history books of the future will say JAIC was about the joint common foundation, Deasy said. ''JAIC could never do all of the AI initiatives with the Department of Defense, nor was it ever created to do that. But what we did say was that for people who are going to roll up [their] sleeves and seriously start trying to leverage AI to help the warfighter every day, at the core of JAIC's success has got to be this joint common foundation,'' he noted.

Deasy noted that the JAIC was powerful and very real.

Into next year, he added, JAIC will have some basic services. And then it's a minimum viable product approach, where JAIC is building some basic services, a lot of native services from cloud providers, but then adding services to that.

''And where we hope to grow the technical platform is a place where people can bring their data, places where we can offer data services, data conditioning, maybe table data labeling and we can start curating data,'' Deasy projected. ''One of the things we'd really like to be able to do for the department is start cataloging and storing algorithms and data. So now we'll have an environment so we can share training data, for example, across programs.''

The modernized software foundation now gives JAIC a platform so it can build AI, Groen said, adding AI has to be a conscious application layer that's applied, leveraging the platform and the things that digital modernization provides.

''But when you think of it that way, holy cow, what a platform to operate from,'' he said.

So now JAIC will really have a place where the joint force can effectively operate, Groen said, adding that the JAIC can now start integrating intel in fires, intel in maneuver command and control, the logistics enterprise, the combat logistics enterprise and sort of the broad support enterprise.

''You can't do any of that without a platform, and you can't do any of that without those digital modernization tenets,'' the JAIC director said.

If JAIC is going to have the whole force operating at the speed of machines, then it has to start bringing these artificial intelligence applications together into an ecosystem, Groen said, noting that it has to be a trusted ecosystem, meaning "we actually have to know, if we're going to bring data into a capability, we have to know that's good data."

''So how do we build an ecosystem so that we can know the provenance of data, and we can ensure that the algorithms are tested in a satisfactory way, so that we can comfortably and safely integrate data and decision making across warfighting functions?'' the JAIC director asked. ''That's the kind of stuff that I think is really exciting, because that's the real transformation that we're after.''


Artificial Intelligence Activity On The Enforcement Front – Technology – Canada – Mondaq News Alerts

Artificial Intelligence ("AI") is clearly on thehorizon of the regulatory landscape. Alongside the use oftechnology to assist with navigating the regulatory process,regulators are now digitizing their enforcement efforts. TheCanadian Securities Administrators ("CSA")1have approached this challenge head-on.

In 2018, the CSA put the capital markets on notice that they were strengthening their technological capabilities to assist in fighting securities misconduct.[2] The CSA confirmed they would rely on AI technology to analyze large data sets, allowing them to detect misconduct faster and earlier, through the Market Analysis Platform ("MAP"), an automated centralized solution that the CSA believed could handle the size of the current market practices.

The CSA's 2019 Enforcement Report, released in June 2020, confirmed that it was preparing to launch the MAP in the near future.[3] It has been two years since the CSA selected Kx, a division of First Derivatives plc, to build and manage the MAP platform. The Kx platform provides some insight into the CSA's potential uses for the MAP. Kx allows for custom real-time and historical data analysis that features redundancy, alerting, and reporting for stock market analysis and algorithmic trading.[4]

The role of big data and the ability to process massive datasets will allow CSA Members to focus limited staff resources on reviewing possible violative conduct in the market. The CSA have confirmed that alongside emerging technology, CSA Members will deploy dedicated teams with established new roles, including data scientists, analysts, and blockchain specialists.[5]

The allocation of these dedicated teams indicates that CSA Members are planning on relying more heavily on AI in enforcing regulatory compliance. Although the power of AI is obvious, its effective use requires experienced staff to examine and evaluate the output of the advanced analytics.

In November 2018, the Ontario Securities Commission ("OSC"), a CSA Member, formed the Burden Reduction Task Force ("Task Force"). The Task Force focused on implementing initiatives to keep Ontario's capital markets competitive, but also took note of the importance of collecting good data over more data. In November 2019, the OSC released the Task Force's report.[6]

Among other things, the Task Force's report considered the potential use of Regulatory Technology ("RegTech") in the OSC's regulation of Ontario's capital markets. RegTech is the use of machine-learning software or other technology in the management of regulatory processes. It has been seen as a disruptor to the current regulatory landscape and an influencer in the modernization of securities regulation. Ultimately, the Task Force's report did not recommend incorporating RegTech-based solutions into the OSC's regulatory enforcement processes.

Where regulatory processes were considered burdensome, the OSC opted to consider technology-based solutions to simplify the regulatory requirements. For example, the OSC was willing to consider technological alternatives to the delivery of notices under sections 11.9 and 11.10 of NI 31-103[7] where it could reduce the burden without compromising the underlying objective of that national instrument.[8]

The Task Force also discussed the expansion of FinTech firms and the concerns raised with their novel business models. The Task Force reinforced the use of initiatives such as the OSC Launchpad (discussed below) to assist in navigating compliance with terms and conditions imposed with registration.

In Canada, regulators have created a number of initiatives, including the OSC Launchpad and the CSA Regulatory Sandbox, allowing registrants to operate innovative business models that don't fit the traditional mould.

The two initiatives are distinct. The OSC Launchpad is an arm of the OSC; the CSA Regulatory Sandbox is a pilot environment where firms can operate in a limited commercial setting, often on a time-limited basis. Some recent firms to use the OSC Launchpad and the CSA Regulatory Sandbox are TokenGX Inc. ("TokenGX") and Wealthsimple Digital Assets Inc. ("Wealthsimple").

In late 2019, TokenGX was granted time-limited exemptive relief from the OSC to pilot test a secondary trading marketplace for crypto-assets (tokens). The trading marketplace is powered by blockchain technology and allows investors to buy and sell private market securities between themselves.[9] The OSC restricted the annual investment limit ($10,000 to $30,000) and has significant oversight of the firm while the pilot test is ongoing.

Wealthsimple obtained OSC registration in August 2020 (subject to numerous conditions) to operate a platform where clients could buy, hold, and sell crypto-assets. Wealthsimple filed an application for time-limited relief from certain registrant obligations, including prospectus and trade reporting requirements.[10] Similar to TokenGX, the OSC has capped the amount investors can fund at $30,000.00 per annum.

The approvals of TokenGX and Wealthsimple signal the beginning of a new framework whereby Canadian regulators are able to simultaneously promote innovation and market confidence while keeping Canada competitive in global financial markets.

Footnotes

[1] The members are the securities regulatory authorities in Alberta, British Columbia, Manitoba, New Brunswick, Nova Scotia, Ontario, Québec, and Saskatchewan ("CSA Members").

[2] CSA, "Canadian Securities Regulators Announce Agreement with Kx to Deliver Advanced Post-Trade Analysis", September 13, 2018, online: (https://www.securities-administrators.ca/aboutcsa.aspx?id=1735).

[3] Collaborating to Protect Investors and Enforce Securities Law, FY2019/2020, at p. 5, online: (http://www.csasanctions.ca/assets/pdf/CSA-Enforcement-Report-English.pdf).

[4] Kx Platform, online: (https://code.kx.com/platform/).

[5] Evolving Securities Enforcement for a Digital World, CSA, FY2018/2019, at p. 11, online: (https://www.securities-administrators.ca/uploadedFiles/General/pdfs/CSA-Enforcement-Report_FINAL201819.pdf).

[6] Reducing Regulatory Burden in Ontario's Capital Markets, OSC, 2019, online: (https://www.osc.gov.on.ca/documents/en/20191119_reducing-regulatory-burden-in-ontario-capital-markets.pdf) [Reducing Regulatory Burden].

[7] NI 31-103, Registration Requirements, Exemptions and Ongoing Registrant Obligations, ss. 11.9-11.10.

[8] Reducing Regulatory Burden at p. 68.

[9] TokenGX Inc., Re, 41 OSCB 8511, online: (https://www.osc.gov.on.ca/documents/en/ord_20191023_tokengx.pdf).

[10] Wealthsimple Digital Assets Inc., Re, 43 OSCB 6548, online: (https://www.osc.gov.on.ca/documents/en/ord_20200807_wealthsimple-digital-assets.pdf).

The content of this article is intended to provide a generalguide to the subject matter. Specialist advice should be soughtabout your specific circumstances.


Misinformation or artifact: a new way to think about machine learning – Newswise

Deep neural networks, multilayered systems built to process images and other data through the use of mathematical modeling, are a cornerstone of artificial intelligence.

They are capable of seemingly sophisticated results, but they can also be fooled in ways that range from relatively harmless - misidentifying one animal as another - to potentially deadly if the network guiding a self-driving car misinterprets a stop sign as one indicating it is safe to proceed.

A philosopher with the University of Houston suggests in a paper published in Nature Machine Intelligence that common assumptions about the cause behind these supposed malfunctions may be mistaken, information that is crucial for evaluating the reliability of these networks.

As machine learning and other forms of artificial intelligence become more embedded in society, used in everything from automated teller machines to cybersecurity systems, Cameron Buckner, associate professor of philosophy at UH, said it is critical to understand the source of apparent failures caused by what researchers call "adversarial examples," when a deep neural network system misjudges images or other data when confronted with information outside the training inputs used to build the network. They're rare and are called "adversarial" because they are often created or discovered by another machine learning network - a sort of brinksmanship in the machine learning world between more sophisticated methods to create adversarial examples and more sophisticated methods to detect and avoid them.
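
For readers unfamiliar with the mechanics, the sketch below shows the textbook way such inputs are often constructed: a small, gradient-guided nudge to every input value (the fast gradient sign idea), here applied to a toy linear classifier with synthetic data. It is a generic illustration of the phenomenon, not code from Buckner's paper.

```python
# Generic illustration of an adversarial example: nudge each input value a tiny
# amount in the direction that most increases the classifier's error.
# Toy linear model and synthetic data, not any real vision system.
import numpy as np

rng = np.random.default_rng(42)
dim = 1024                       # pretend this is a flattened 32x32 "image"

w = rng.normal(size=dim)         # a fixed, already-"trained" linear classifier
x = rng.normal(size=dim)         # an input
x = x * np.sign(w @ x)           # flip if needed so the clean input scores as class +1

def predict(v):
    return 1 if w @ v > 0 else -1

# For a linear model the gradient of the score with respect to the input is w,
# so step every "pixel" a small epsilon against the sign of that gradient.
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

print("clean prediction:      ", predict(x))        # +1
print("adversarial prediction:", predict(x_adv))    # usually flips to -1
print("max per-pixel change:  ", np.max(np.abs(x_adv - x)))  # bounded by epsilon
```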

"Some of these adversarial events could instead be artifacts, and we need to better know what they are in order to know how reliable these networks are," Buckner said.

In other words, the misfire could be caused by the interaction between what the network is asked to process and the actual patterns involved. That's not quite the same thing as being completely mistaken.

"Understanding the implications of adversarial examples requires exploring a third possibility: that at least some of these patterns are artifacts," Buckner wrote. " ... Thus, there are presently both costs in simply discarding these patterns and dangers in using them naively."

Adversarial events that cause these machine learning systems to make mistakes aren't necessarily caused by intentional malfeasance, but that's where the highest risk comes in.

"It means malicious actors could fool systems that rely on an otherwise reliable network," Buckner said. "That has security applications."

A security system based upon facial recognition technology could be hacked to allow a breach, for example, or decals could be placed on traffic signs that cause self-driving cars to misinterpret the sign, even though they appear harmless to the human observer.

Previous research has found that, counter to previous assumptions, there are some naturally occurring adversarial examples - times when a machine learning system misinterprets data through an unanticipated interaction rather than through an error in the data. They are rare and can be discovered only through the use of artificial intelligence.

But they are real, and Buckner said that suggests the need to rethink how researchers approach the anomalies, or artifacts.

These artifacts haven't been well understood; Buckner offers the analogy of a lens flare in a photograph - a phenomenon that isn't caused by a defect in the camera lens but is instead produced by the interaction of light with the camera.

The lens flare potentially offers useful information - the location of the sun, for example - if you know how to interpret it. That, he said, raises the question of whether adversarial events in machine learning that are caused by an artifact also have useful information to offer.

Equally important, Buckner said, is that this new way of thinking about the way in which artifacts can affect deep neural networks suggests a misreading by the network shouldn't be automatically considered evidence that deep learning isn't valid.

"Some of these adversarial events could be artifacts," he said. "We have to know what these artifacts are so we can know how reliable the networks are."

