2020 Call for Code Global Challenge Finalists Selected for Innovative Solutions to Take on COVID-19 and Climate Change – PRNewswire

ARMONK, N.Y., Sept. 28, 2020 /PRNewswire/ -- Call for Code Founding Partner IBM (NYSE: IBM) and Creator David Clark Cause today announced the top five worldwide finalists for the 2020 Call for Code Global Challenge. Call for Code unites hundreds of thousands of developers to create and deploy applications powered by open source technology that can tackle some of the world's biggest challenges. This year, developers around the globe were asked to create solutions to help communities fight back against climate change and COVID-19.

Now in its third year, the Call for Code global competition has generated more than fifteen thousand solutions built using a combination of open source-powered products and technologies, including Red Hat OpenShift, IBM Cloud, IBM Watson, IBM Blockchain, data from The Weather Company, and APIs from ecosystem partners like HERE Technologies and IntelePeer. Since its launch in 2018, this movement has grown to more than 400,000 developers and problem solvers across 179 nations, reflecting the reality that challenges like climate change and COVID-19 demand solutions that work on the local level, but also have the ability to scale and help any community, anywhere.

"This year of crisis underscores the need for the world's developers and business leaders to apply the power of hybrid cloud, AI and open source technology to address society's most pressing issues," said Bob Lord, Senior Vice President, Cognitive Applications, Blockchain, and Ecosystems, IBM. "For the third year in a row, the developer community has answered the Call for Code in overwhelming numbers, creating extraordinary solutions powered by open source technology. As a leader in open source with a long history of driving tech for good, it is incredibly gratifying for us at IBM to see how the broader tech community continues to come together, unified in purpose to make a tangible difference in the lives of so many."

Call for Code Global Top Five

These five finalists were chosen from an elite group of top solutions from each region of the world:

Each year, the Call for Code Global Prize winner receives $200,000 and hands-on support from IBM, The Linux Foundation, and other partners to expand the open source community around their solution and to deploy their solution in areas of need. This year's grand prize winner will be selected by an elite group of judges, including some of the most eminent leaders in human rights, disaster risk reduction, business, and technology.

Path to Deployment

The IBM Service Corps and technical experts helped incubate and deploy the previous two Global Challenge winning solutions. Last year's Call for Code Global Challenge winning team, Prometeo, created a wearable device that measures carbon monoxide, smoke concentration, humidity, and temperature to monitor firefighter safety in real time and to help improve firefighters' health outcomes over the long term. The solution was incubated and completed its first wildfire field test earlier this year during a controlled burn with the Grups de Reforç d'Actuacions Forestals (GRAF) and the Grup d'Emergències Mèdiques (GEM) dels Bombers de la Generalitat de Catalunya near Barcelona, Spain. Prometeo was developed by a team comprising a veteran firefighter, an emergency medical nurse, and three developers.

Project Owl, the winning solution from Call for Code 2018, provides an offline communication infrastructure that gives first responders a simple interface for managing all aspects of a disaster. The physical "clusterduck" network is made of hubs that create a mesh network that can send speech-based communications using conversational systems to a central application. Together with the IBM Service Corps, Project Owl has been piloted across Puerto Rico, focusing on areas that were hit hard by hurricanes.

Both projects, as well as others, continue to be incubated through the Call for Code deployment pipeline.

Call for Code University Edition

This year, IBM partnered with the Clinton Global Initiative University (CGI U) to launch a dedicated University Edition within Call for Code. Together, IBM and CGI U reached more than 53,000 students around the world to help create solutions to fight COVID-19 and climate change. The 2020 Call for Code Challenge University finalists are: Kairos App (Latin America); Lupe (Europe); Pandemap (Asia Pacific); Plant-it (North America); and Rechargd (Asia Pacific). Solutions in the University Edition are competing for a grand prize of $10,000. The grand prize-winning team and runner-up will also receive the opportunity to interview for a potential role at IBM.

"This year, we launched the dedicated University Edition within the Call for Code Global Challenge so university students around the world could apply their learnings from the classroom, life experiences and imagination to tackling climate change and COVID-19 in sustainable, equitable and innovative ways," said Chelsea Clinton, Vice Chair, Clinton Foundation. "These finalist solutions are outstanding, and we look forward to announcing a winner on October 13th."

Growing Ecosystem

Call for Code's growth and success are a product of the unique ecosystem that IBM and David Clark Cause have convened to unite the technology development community with humanitarian organizations, ensuring that solutions are robust, efficient, innovative, and easy to use. This community includes the United Nations Human Rights Office, The Linux Foundation, United Nations Office for Disaster Risk Reduction, Clinton Foundation and Clinton Global Initiative University, Cloud Native Computing Foundation, Verizon, Persistent Systems, Arrow Electronics, HERE Technologies, Ingram Micro, IntelePeer, Consumer Technology Association Foundation, World Bank, Caribbean Girls Hack, Kode With Klossy, World Institute on Disability, and many more.

"We are facing a time of unprecedented crisis," said Laurent Sauveur, Chief, External Relations, UN Human Rights. "While the COVID-19 pandemic puts lives and livelihoods at immediate risk, climate change is an existential threat for humanity. By triggering global engagement, initiatives like Call for Code open up the potential for developers and problem solvers around the world to put their skills to use to create inclusive and effective response solutions that can be deployed quickly yet have long-term impact."

The grand prize and University Edition winners will be announced on October 13 via a digital event, the 2020 Call for Code Awards: A Global Celebration of Tech for Good.

About Call for Code Global Challenge

Developers have revolutionized the way people live and interact with virtually everyone and everything. Where most people see challenges, developers see possibilities. That's why David Clark Cause created Call for Code in 2018 and launched it alongside Founding Partner IBM and their partner UN Human Rights. This five-year, $30 million global initiative is a rallying cry to developers to use their mastery of the latest technologies to drive positive and long-lasting change across the world through code. Call for Code global winning solutions are further developed, incubated, and deployed as sustainable open source projects to ensure they can drive positive change.

MEDIA CONTACTS

Deirdre Leahy | [email protected] | 845.863.4552

Chris Blake | [email protected] | 415.613.1120

SOURCE IBM

Nasty Instagram vulnerability could have given hackers the keys to the kingdom – TechRadar

After auditing the security of Instagram's apps for Android and iOS, security researchers from Check Point have discovered a critical vulnerability that could be used to perform remote code execution on a victim's smartphone.

The security firm began its investigation into the popular social media app with the aim of examining the third-party projects it uses. Software developers of all sizes use open source projects in their software to save time and money. During its security audit of Instagram's apps, Check Point found a vulnerability in the way the service uses the open source project Mozjpeg as its JPEG decoder for uploaded images.

The vulnerability was discovered by fuzzing the open source project. For those unaware, fuzzing involves deliberately feeding garbled or malformed data into a specific application or program. If the software fails to handle the unexpected data properly, developers can then identify potential security weaknesses and address them before users are put at risk.

To exploit the vulnerability in Instagram's mobile apps, an attacker would only need to send a potential victim a single malicious image via email or social media. If the picture is saved to the victim's device, the vulnerability is triggered as soon as the victim opens the Instagram app, giving the attacker full access to the device for remote takeover.

The vulnerability discovered by Check Point's researchers gives an attacker full control over a user's Instagram app, allowing them to read direct messages, delete or post photos, or change the user's account profile details. Moreover, since Instagram has extensive permissions on a user's device, the vulnerability could also be used to access their contacts, location data, camera, and any files stored on the device.

Upon discovery, the firm's researchers responsibly disclosed their findings to Facebook, and the social media giant described the vulnerability, tracked as CVE-2020-1895, as an integer overflow leading to a heap buffer overflow. Facebook issued a patch to address the vulnerability, and Check Point waited six months before publishing a blog post on its discovery.

Yaniv Balmas, head of cyber research at Check Point, provided further insight into the potential dangers of using third-party code, saying:

This research has two main takeaways. First, 3rd party code libraries can be a serious threat. We strongly urge developers of software applications to vet the 3rd party code libraries they use to build their application infrastructures and make sure their integration is done properly. 3rd party code is used in practically every single application out there, and it's very easy to miss out on serious threats embedded in it. Today it's Instagram, tomorrow who knows?

Via SecurityInformed.com

Instagram flaw shows importance of managing third-party apps, images – SC Magazine

A remote code execution (RCE) flaw found in Instagram that lets bad actors potentially take over a victim's phone by sending a malicious image shines a spotlight on the vulnerabilities tied to third-party apps and image files.

Researchers from Check Point crashed Mozjpeg, open source software that Instagram uses as a decoder for images uploaded to the photo-sharing service, to exploit CVE-2020-1895, according to a blog post. Although the bug was discovered on an Android device, Check Point said iOS devices are also at risk.

Yaniv Balmas, Check Point's head of cyber research, said Instagram made a mistake in how it integrated Mozjpeg into the Instagram app. Balmas said the image-parsing code used as a third-party library wound up being the weakest part of the Instagram app, noting that researchers were able to crash it 447 times. Check Point notified Instagram owner Facebook of the vulnerability, and it has since been fixed.

"Every modern application uses third-party libraries; it would make no sense to develop otherwise," Balmas said. "But that doesn't mean you have to blindly trust it. Moving forward, developers need to treat third-party libraries like their own code."

The Synopsys Cybersecurity Research Centre found that open source software makes up on average 70 percent of the code in audited commercial applications, and 99 percent of all applications have some aspect of open source code attached to them.

In the case of the Check Point discovery, development teams must treat images as unvalidated input and test for the effects of corruption, said Tim Mackey, principal security strategist at Synopsys. He said development teams should treat any abnormal behavior during these tests with the same level of priority given to a SQL injection or other unvalidated-input weakness in code.

"Open source has many benefits, but it carries with it a shared-use responsibility," Mackey said. "If you are using an open source component, and it's critical to the success of your app or business, then you need to manage it properly. One part of that responsibility is to test that your chosen components are securely used in your applications. If there turns out to be an issue, then it's your responsibility to report it to the authors, but ideally, if you're able to provide a fix, do so. … The security of all software is only as good as the weakest component."

Chris Olson, founder and CEO of The Media Trust, said security pros should consider a CVE discovery at a big platform like Facebook/Instagram a red flag.

"The big platforms spend a lot of resources protecting their ecosystems, so if it could happen there, that's significant," Olson said. "What I worry about more is that most companies are focused on protecting their own infrastructures and not on the consumers, who mostly use third, fourth and fifth parties to run the big platform applications. The vast majority of the cyber attacks are on the third-, fourth- and fifth-party apps. It's the biggest miss in cyber, and too many companies don't even know it's an issue."

Tim Erlin, vice president of product management and strategy at Tripwire, was more low-key, saying that there's nothing new about exploitation of third-party libraries. Erlin said the vulnerability Check Point uncovered was cause for concern because Instagram has millions of users, and organizations such as publishers, corporate marketing departments, ad networks and radiology labs use thousands of images every day.

"My advice to developers is to run a vulnerability scan on all third-party apps they're using to process images, as well as all third-party apps on the website," Erlin said. "They should also do the vulnerability scans on a regular basis. For companies that don't want to slow things down and run the scans, find tools to automate the process."

Under pressure: Managing the competing demands of development velocity and application security – Security Boulevard

Nearly 50% of development teams knowingly release vulnerable code. Learn why vulnerabilities are overlooked and how you can improve application security.

The first software development team I worked on operated on the following mantra:

Make it work.
Then, make it fast.
Then, make it elegant (maybe).

Meaning, don't worry about performance optimizations until your code actually does what it's supposed to do, and don't worry about code maintainability until after you know it both works and performs well. Users generally have no idea how maintainable the code is, but they do know if the application is broken or slow. So more often than not, we'd never get around to refactoring the code, at least not until the code debt started to impact application reliability and performance.

Today that developer mantra has two additional lines:

Ship it sooner.
And while you're at it, make it secure.

As with application performance and reliability, delivering an application on time is easily quantified and observed. Everybody knows when you miss a deadline, something that's easy to do when your release cycles are measured in weeks, days, or even hours. But the security of an application isn't so easily observed or quantified, at least not until there's a security breach.

It should come as no surprise, then, that nearly half of the respondents to the modern application development security survey conducted by Enterprise Strategy Group (ESG) state that their organizations regularly push vulnerable code to production. It's also not surprising that for over half of those teams, tight delivery schedules and critical deadlines are the main contributing factor. In the presence of a deadline, what can be measured is what's going to get done, and what can't be (or at least isn't) measured often doesn't.

However, "we don't have time to do it" doesn't really cut it when it comes to application security. This is demonstrated by the 60% of respondents who reported that their applications have suffered OWASP Top 10 exploits during the past 12 months. The competing demands of short release cycles and improved application security are a real challenge for development and security teams.

It doesn't have to be this way, and other findings in the survey report point to opportunities teams have to maintain development velocity while improving application security. Here are just a few:

Reject silver bullets. Gone are the days of security teams simply running DAST and penetration tests at the end of development. A consistent trend shown in the report is that teams are leveraging multiple types of security testing tools across the SDLC to address different forms of risk in both proprietary and open source code.

Integrate and automate. Software development is increasingly automated, and application security testing needs to be as well. Over half the respondents indicated that their security controls are highly integrated into their DevOps processes, with another 38% saying they are heading down that same path.

Train the team. Most developers lack sufficient application security knowledge to ensure their code isn't vulnerable. Survey respondents indicated that developer knowledge is a challenge, as is consistent training. Without sufficient software security training, developers struggle to address the findings of application security tests. An effective way to remedy this is to provide just-in-time security training delivered through the IDE with a solution like Code Sight.

Keep score. If what gets measured gets done, then it's important to measure the progress of both your AppSec testing and security training programs. This includes tracking the introduction and mitigation of security bugs, as well as improvements to both of these metrics over time; i.e., who is writing secure code, who isn't, and are they improving?

There are a number of other interesting findings and recommendations in the survey report that can help your team manage the competing pressures of release schedules and application security. You can check it out here, and you can also learn more by joining our upcoming webinar, Under Pressure: Building Security Into Application Development, where I'll be interviewing the survey report's author, Dave Gruber, senior analyst at Enterprise Strategy Group.

Read more:

Under pressure: Managing the competing demands of development velocity and application security - Security Boulevard

Security professional launches a community-based website with open-sourced training programs dedicated to helping others in the industry – Security…

Security professional launches a community-based website with open-sourced training programs dedicated to helping others in the industry | 2020-09-28 | Security Magazine

W3C Drops WordPress from Consideration for Redesign, Narrows CMS Shortlist to Statamic and Craft – WP Tavern

The World Wide Web Consortium (W3C), the international standards organization for the web, is redesigning its website and will soon be selecting a new CMS. Although WordPress is already used to manage the blog and news sections of the W3C website, the organization is open to adopting a new CMS to meet its list of preferences and requirements.

Studio 24, the digital agency selected for the redesign project, narrowed their consideration to three CMS candidates:

Studio 24 was aiming to finalize its recommendations in July but found that none of the candidates complied with the W3C's authoring tool accessibility guidelines. The CMSs that came closest to complying with those guidelines were not as well suited to the other project requirements.

In the most recent project update posted to the site, Studio 24 reported that it has shortlisted two CMS platforms. Coralie Mercier, Head of Marketing and Communications at W3C, confirmed that these are Statamic and Craft CMS.

WordPress was not submitted to the same review process, as the Studio 24 team claims to have extensive experience working with it. In the summary of its concerns, Studio 24 cited Gutenberg, accessibility issues, and the fact that the Classic Editor plugin is slated to stop being officially maintained on December 31, 2021:

First of all, we have concerns about the longevity of WordPress as we use it. WordPress released a new version of their editor in 2018: Gutenberg. We have already rejected the use of Gutenberg in the context of this project due to accessibility issues.

If we choose to do away with Gutenberg now, we cannot go back to it at a later date. This would amount to starting from scratch with the whole CMS setup and theming.

Gutenberg is the future of WordPress. The WordPress core development team keeps pushing it forward and wants to roll it out to all areas of the content management system (navigation, sidebar, options etc.) as opposed to limiting its use to the main content editor as is currently the case.

This means that if we want to use WordPress long term, we will need to circumvent Gutenberg and keep circumventing it for a long time and in more areas of the CMS as time goes by.

Another major factor in the decision to remove WordPress from consideration was that they found no elegant solution to content localization and translation.

Studio 24 also expressed concerns that tools like ACF, Fewbricks, and other plugins might not be maintained for the Classic Editor experience in the context of a widespread adoption of Gutenberg by users and developers.

More generally, we think this push to expand Gutenberg is an indication of WordPress focusing on the requirements of their non-technical user base as opposed to their audience of web developers building custom solutions for their clients.

It seems that the digital agency W3C selected for the project is less optimistic about the future of Gutenberg and may not have reviewed recent improvements to the overall editing experience since 2018, including those related to accessibility.

Accessibility consultant and WordPress contributor Joe Dolson recently gave an update on the Gutenberg accessibility audit at WPCampus 2020 Online. He reported that while challenges remain, many issues raised in the audit have been addressed across the whole interface, and two-thirds of them have been solved. "Overall accessibility of Gutenberg is vastly improved today over what it was at release," Dolson said.

Unfortunately, Studio 24 didn't put WordPress through the same content creation and accessibility tests that it used for Statamic and Craft CMS. This may be because the agency had already planned to use a Classic Editor implementation and didn't see the necessity of putting Gutenberg through its paces.

These tests involved creating pages with flexible components, which Studio 24 referred to as "blocks of layout," for things like titles, WYSIWYG text input, and videos. They also involved creating a template for news items in which all the content input by the user would be displayed (without formatting).

Gutenberg would lend itself well to these use cases but was not formally tested alongside the other candidates, the team citing its extensive experience with WordPress. I would like to see the W3C team revisit Gutenberg for a fair shake against the proprietary CMSs.

The document outlining the CMS requirements for the project states that W3C has a strong preference for an open-source license for the CMS platform as well as a CMS that is long-lived and easy to maintain. This preference may be due to the economic benefits of using a stable, widely adopted CMS, or it may be inspired by the undeniable symbiosis between open source and open standards.

The industry has learned by experience that the only software-related standards to fully achieve [their] goals are those which not only permit but encourage open source implementations. Open source implementations are a quality and honesty check for any open standard that might be implemented in software

WordPress is the only one of the three original candidates to be distributed under an OSD-compliant license. (CMS code being available on GitHub isn't the same thing.)

Using proprietary software to publish the open standards that underpin the web isn't a good look. While proprietary software makers are certainly capable of implementing open standards, regardless of licensing, there are myriad benefits for open standards in the context of open source usage:

The community of participants working with OSS may promote open debate resulting in an increased recognition of the benefits of various solutions, and such debate may accelerate the adoption of solutions that are popular among the OSS participants. These characteristics of OSS support the evolution of robust solutions and are often a significant boost to the market adoption of open standards, in addition to the customer-driven incentives for interoperability and open standards.

Although both Craft CMS and Statamic have their code bases available on GitHub, they share similarly restrictive licensing models. The Craft CMS contributing document states:

Craft isn't FOSS. Let's get one thing out of the way: Craft CMS is proprietary software. Everything in this repo, including community-contributed code, is the property of Pixel & Tonic.

That comes with some limitations on what you can do with the code:

You can't change anything related to licensing, purchasing, edition/feature-targeting, or anything else that could mess with our alcohol budget. You can't publicly maintain a long-term fork of Craft. There is only One True Craft.

Statamic's contributing docs have similar restrictions:

Statamic is not Free Open Source Software. It is proprietary. Everything in this and our other repos on GitHub, including community-contributed code, is the property of Wilderborn. For that reason there are a few limitations on how you can use the code:

Projects with this kind of restrictive licensing often fail to attract much contribution or adoption, because the freedoms are not clear.

In a GitHub issue requesting that Craft CMS go open source, Craft founder and CEO Brandon Kelly said, "Craft isn't closed source: all the source code is right here on GitHub," and claimed the license is relatively unrestrictive as far as proprietary software goes and that contributing works in a similar way to FOSS projects. This rationale is not convincing enough for some developers commenting on the thread.

"I am a little hesitant to recommend Craft with a custom open source license," Frank Anderson said. "Even if this was an MIT+ license that added the license and payment, much like React used to have. I am hesitant because the standard open source licenses have been tested."

When asked about the licensing concerns raised by Studio 24 narrowing its candidates to two proprietary software options, Coralie Mercier told me, "we are prioritizing accessibility." A recent project update also reports that both CMS suppliers W3C is reviewing have engaged positively with authoring tool accessibility needs and have made progress in this area.

Even if cooperative teams at the proprietary CMSs are working on accessibility improvements as the result of this high-profile client, that effort cannot compare to the massive community of contributors that OSD-compliant licensing enables.

It's unfortunate that the state of open source CMS accessibility has forced the organization to narrow its selections to proprietary software options for its first redesign in more than a decade.

Open standards go hand in hand with open source. There is a mutually beneficial connection between the two that has caused the web to flourish. I don't see using a proprietary CMS as an extension of W3C's values, and it's not clear how much more benefit to accessibility the proprietary options offer in comparison. W3C may be neutral in licensing debates, but in the spirit of openness, I think the organization should adopt an open source CMS, even if it is not WordPress.

The push for content moderation legislation around the world – Brookings Institution

The summer of 2020 was very consequential for online speech. After years of national debate in the United States, several reform initiatives around the world, and the added pressure of the global pandemic, the demand for policy action finally boiled over. We are witnessing a shift in the primary driver of regulation, from protecting innovation at all costs to ostensibly protecting aggrieved citizens at all costs. The U.S., Europe, and Brazil are in the throes of a fundamental intermediary liability legislative fight: Who deserves safeguarding, what are the major threats, and can government rewrite the rules without pulling the plug on the internet as we know it? Let's review how the debate is shaping up around the world and what it means for government action.

In May 2020, France passed its "fighting hate on the internet" law, built in the image of Germany's much-maligned 2017 Network Enforcement Act (NetzDG), one of the most stringent intermediary liability laws on the European continent. The law requires social network companies to almost instantly take down material deemed obviously illegal, at risk of heavy fines and without judicial decision-making safeguards. After its passage, the French Constitutional Court struck it down, finding it to be an attack on freedom of expression, among many other concerns. Meanwhile, in June, Germany decided that NetzDG was not enough; it introduced and passed a reform in the Bundestag. The new law requires social media platforms not just to take down violent hate speech, but also to report it to the police.

Also in June 2020, Brazil passed, in one of its legislative chambers, a bill fighting "fake news": the Brazilian Law of Freedom, Liability, and Transparency on the Internet, whose initial drafts also mirrored the original NetzDG text. The final version, not without controversy, tackled intermediary liability by requiring only mandatory transparency reports and political content disclosure, and by ensuring due process and appeals for content moderation decisions.

Similarly, in the U.S., the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2019 (EARN IT Act) has been hotly contested, not just on content moderation but also on potentially breaking strong encryption. The bill had an entirely different initial draft from the one that passed its congressional committee vote in July 2020. Originally, it changed the liability standard for platforms from actual knowledge of child sexual abuse or exploitation materials to the mere existence of such material. The proposed bill would also have created a 19-member national commission, chaired by the attorney general, charged with creating a set of mandatory best practices for intermediaries to follow or else lose their liability protection. Ultimately, the version that passed the committee vote scrapped the change of standard and made the best practices optional, while adding a questionable carve-out of Section 230 for state laws against child sexual abuse materials.

The build-up to these bills highlights some general trends. Germany's bill suffered significant pushback, but it neither originated in nor passed through a public fact-finding commission. France and Brazil, on the other hand, had set up committees to understand the problem of content moderation and the entire suite of potential solutions. The French government backed down after its original draft bill was panned not just for damaging freedom of speech and potentially harming disadvantaged groups, but also for its failure to fight hate, disinformation, and other unsavory online content. It seemingly settled into a longer, more thorough process, producing a nuanced and well-researched executive branch commission report.

Similar to France, by the end of 2019 the Brazilian National Congress had created an ad hoc misinformation investigative committee. Unlike in France, the committee was not able even to hold hearings with representatives of social media platforms, let alone issue a report, before the pandemic hit. The pandemic shifted priorities for both countries. In France it meant rushing the bill through under the cover of national security, despite the nuanced perspective of the report. In Brazil it meant no report, and the introduction of a bill that received a series of online public hearings and an entirely revised text after strong pushback.

While no external committee was even suggested, the trajectory of the EARN IT Act is similar to Brazil's fake news bill: an initial draft, universally criticized, is introduced; stakeholders rush in to explain its potential damage; and the version that passes the first vote is materially different and watered down, while barely addressing the earlier criticisms.

Unlike the others, the eminently bureaucratic and consultative nature of the European Union lends itself to a long and overly thorough process as it attempts to reform its decades-old eCommerce Directive through the Digital Services Act. Incidentally, the bill is the only one whose text is not available before the global consultations wrap up. However, the general trend is worrisome: all the legislation discussed so far started from the premise that something had to be done and that the NetzDG censorship model was the best option. Left to their own devices and unencumbered by open debate or impartial fact-finding, lawmakers would largely have followed this model: by December 2019, 13 countries had approved laws in the spirit, if not also the letter, of NetzDG. The most recent, Turkey's, is billed as the strictest. And as a harbinger of potential future global reforms, NetzDG itself is getting stricter.

Vigilance across stakeholder groups has so far led to meaningful if limited success in changing free speech- and privacy-encroaching regulations around the world, which may be enough to send a strong message to the drafters of the EU's Digital Services Act. While France and Germany have passed legislation, the Brazilian and U.S. bills remain uncertain. The EARN IT Act's drafters, notably Senator Lindsey Graham (R-SC), were hoping to pass the legislation in the Senate before the August recess. The Brazilian bill's status is unclear, as it awaits discussion and passage in the country's other chamber, but with mounting national and international criticism, there may still be hope for positive change.

High-profile bills get attention and resultant national and international pushback, but it is worrisome that the default intermediary liability legislation seems to be the draconian NetzDG, or underdeveloped concepts like a duty of care. From what is known of what the Digital Services Act might contain, it is the only bill not solving for a perceived immediate problem, like disinformation, child sexual abuse, or hate speech, without regard to the potential aftermath.

But speaking more generally, the 2020 bills mark a change of mindset from the focus on innovation and freedom of expression that catalyzed the original legislation now being marked for reform. Besieged by disinformation, harassment, and threats of violence or deplatforming, users have demanded new legislation to protect not just themselves but also the platforms, which they paradoxically hold as both integral to and infringing on their fundamental rights. The "do something" ethos behind the reform bills is a direct answer to this phenomenon. However, replacing the myopic view of moderation as mostly inconsequential with the equally myopic view of forced moderation, regardless of larger systemic implications, will not make us any less blind.

View original post here:

The push for content moderation legislation around the world - Brookings Institution

Inside the Army’s futuristic test of its battlefield artificial intelligence in the desert – C4ISRNet

YUMA PROVING GROUND, Ariz. After weeks of work in the oppressive Arizona desert heat, the U.S. Army carried out a series of live fire engagements Sept. 23 at Yuma Proving Ground to show how artificial intelligence systems can work together to automatically detect threats, deliver targeting data and recommend weapons responses at blazing speeds.

Set in the year 2035, the engagements were the culmination of Project Convergence 2020, the first in a series of annual demonstrations utilizing next generation AI, network and software capabilities to show how the Army wants to fight in the future.

The Army was able to use a chain of artificial intelligence, software platforms and autonomous systems to take sensor data from all domains, transform it into targeting information, and select the best weapon system to respond to any given threat in just seconds.

Army officials claimed that these AI and autonomous capabilities have shortened the sensor-to-shooter timeline (the time from when sensor data is collected to when a weapon system is ordered to engage) from 20 minutes to 20 seconds, depending on the quality of the network and the number of hops between where the data is collected and its destination.

"We use artificial intelligence and machine learning in several ways out here," Brigadier General Ross Coffman, director of the Army Futures Command's Next Generation Combat Vehicle Cross-Functional Team, told visiting media.

"We used artificial intelligence to autonomously conduct ground reconnaissance, employ sensors and then passed that information back. We used artificial intelligence and aided target recognition and machine learning to train algorithms on identification of various types of enemy forces. So, it was prevalent throughout the last six weeks."

Promethean Fire


The first exercise is informative of how the Army stacked AI capabilities together to automate the sensor-to-shooter pipeline. In that example, the Army used space-based sensors operating in low Earth orbit to take images of the battleground. Those images were downlinked to a TITAN ground station surrogate at Joint Base Lewis-McChord in Washington, where they were processed and fused by a new system called Prometheus.

Currently under development, Prometheus is an AI system that takes the sensor data ingested by TITAN, fuses it, and identifies targets. The Army received its first Prometheus capability in 2019, although its targeting accuracy is still improving, according to one Army official at Project Convergence. In some engagements, operators were able to send in a drone to confirm potential threats identified by Prometheus.

From there, the targeting data was delivered to a Tactical Assault Kit, a software program that gives operators an overhead view of the battlefield populated with both blue and red forces. As new threats are identified by Prometheus or other systems, the data is automatically entered into the program to show users their location. Specific images and live feeds can be pulled up in the environment as needed.

All of that takes place in just seconds.
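The chain described above (orbital imagery, downlink to a TITAN ground station, fusion and target identification by Prometheus, then automatic population of the Tactical Assault Kit) can be sketched as a simple processing pipeline. This is purely illustrative: the class names, fields, coordinates and confidence threshold below are invented stand-ins, since none of these systems' interfaces are public.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    """A raw sensor return (hypothetical fields)."""
    lat: float
    lon: float
    confidence: float

@dataclass
class Threat:
    """A fused, identified target candidate."""
    lat: float
    lon: float
    label: str

def downlink_imagery() -> List[Detection]:
    """Stand-in for space-based sensors downlinking to a ground station."""
    return [Detection(32.9, -114.3, 0.91), Detection(32.8, -114.1, 0.42)]

def fuse_and_identify(detections: List[Detection], threshold: float = 0.5) -> List[Threat]:
    """Sketch of the Prometheus role: fuse sensor data, keep confident targets."""
    return [Threat(d.lat, d.lon, "enemy") for d in detections if d.confidence >= threshold]

def update_common_operating_picture(threats: List[Threat]) -> List[str]:
    """Sketch of the Tactical Assault Kit auto-populating new threats."""
    return [f"{t.label} @ ({t.lat}, {t.lon})" for t in threats]

cop = update_common_operating_picture(fuse_and_identify(downlink_imagery()))
print(cop)  # ['enemy @ (32.9, -114.3)']
```

Each stage only consumes the previous stage's output, which is what lets the whole chain run in seconds once the network hops are fast enough.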

Once the Army has its target, it needs to determine the best response. Enter the real star of the show: the FIRES Synchronization to Optimize Responses in Multi-Domain Operations, or FIRESTORM.

What is FIRESTORM? "Simply put, it's a computer brain that recommends the best shooter, updates the common operating picture with the current enemy situation and friendly situation, and missions the effectors that we want to eradicate the enemy on the battlefield," said Coffman.

Army leaders were effusive in praising FIRESTORM throughout Project Convergence. The AI system works within the Tactical Assault Kit. Once new threats are entered into the program, FIRESTORM processes the terrain, available weapons, proximity, the number of other threats and more to determine the best firing system to respond to that given threat. Operators can assess and follow through on the system's recommendations with just a few clicks of the mouse, sending orders to soldiers or weapons systems within seconds of identifying a threat.

Just as important, FIRESTORM provides critical target deconfliction, ensuring that multiple weapons systems aren't redundantly firing on the same threat. Right now, that sort of deconfliction would have to take place over a phone call between operators. FIRESTORM speeds up that process and eliminates potential misunderstandings.
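The behavior described in the last two paragraphs, scoring candidate shooters against each threat and deconflicting so assets are not redundantly tasked, can be sketched as a greedy assignment loop. Everything here is an assumption for illustration (the shooter names, ranges, readiness values and the scoring formula); FIRESTORM's actual model is not public.

```python
import math
from typing import Dict, List

def score(shooter: Dict, threat: Dict) -> float:
    """Higher is better: an in-range shooter scores by readiness and proximity."""
    dist = math.dist(shooter["pos"], threat["pos"])
    if dist > shooter["range_km"]:
        return float("-inf")  # out of range: never recommend
    return shooter["readiness"] - dist / shooter["range_km"]

def recommend(shooters: List[Dict], threats: List[Dict]) -> Dict[str, str]:
    """Greedy threat-to-shooter pairing with simple deconfliction."""
    assignments: Dict[str, str] = {}
    busy = set()  # deconfliction: one engagement per shooter at a time
    for threat in threats:
        candidates = [(score(s, threat), s["name"]) for s in shooters if s["name"] not in busy]
        if not candidates:
            continue
        best_score, best_name = max(candidates)
        if best_score > float("-inf"):
            assignments[threat["name"]] = best_name  # at most one shooter per threat
            busy.add(best_name)
    return assignments

shooters = [
    {"name": "ERCA", "pos": (0.0, 0.0), "range_km": 70.0, "readiness": 1.0},
    {"name": "mortar", "pos": (5.0, 5.0), "range_km": 10.0, "readiness": 0.8},
]
threats = [{"name": "T1", "pos": (40.0, 0.0)}, {"name": "T2", "pos": (6.0, 6.0)}]
print(recommend(shooters, threats))  # {'T1': 'ERCA', 'T2': 'mortar'}
```

The `busy` set is the deconfliction step: once a shooter is assigned, it drops out of the candidate list, which replaces the phone-call coordination the article mentions.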

In that first engagement, FIRESTORM recommended the Extended Range Cannon Artillery. Operators approved the algorithm's choice, and the cannon promptly fired a projectile at a target 40 kilometers away. The process from identifying the target to sending the orders took less time than the projectile's flight to the target.

Perhaps most surprising is how quickly FIRESTORM was integrated into Project Convergence.

"This computer program has been worked on in New Jersey for a couple years. It's not a program of record. This is something that they brought to my attention in July of last year, but it needed a little bit of work. So we put effort, we put scientists and we put some money against it," said Coffman. "The way we used it is, as enemy targets were identified on the battlefield, FIRESTORM quickly paired those targets with the best shooter in position to put effects on it. This is happening faster than any human could execute. It is absolutely an amazing technology."

Dead Center

Prometheus and FIRESTORM weren't the only AI capabilities on display at Project Convergence.

In other scenarios, an MQ-1C Gray Eagle drone was able to identify and target a threat using the on-board Dead Center payload. With Dead Center, the Gray Eagle was able to process the sensor data it collected, identifying a threat on its own without having to send the raw data back to a command post for processing and target identification. The drone was also equipped with the Maven Smart System and Algorithmic Inference Platform, a product created by Project Maven, a major Department of Defense effort to use AI for processing full-motion video.

According to one Army officer, the capabilities of the Maven Smart System and Dead Center overlap, but placing both on the modified Gray Eagle at Project Convergence helped them to see how they compared.

With all of the AI engagements, the Army ensured there was a human in the loop to provide oversight of the algorithms' recommendations. When asked how the Army was implementing the Department of Defense's principles of ethical AI use, adopted earlier this year, Coffman pointed to the human barrier between AI systems and lethal decisions.

"So obviously the technology exists to remove the human (right, the technology exists), but the United States Army, an ethics-based organization, that's not going to remove a human from the loop to make decisions of life or death on the battlefield, right? We understand that," explained Coffman. "The artificial intelligence identified geo-located enemy targets. A human then said, 'Yes, we want to shoot at that target.'"

Originally posted here:
Inside the Army's futuristic test of its battlefield artificial intelligence in the desert - C4ISRNet

Artificial intelligence: threats and opportunities | News – EU News

The increasing reliance on AI systems also poses potential risks.

Underuse of AI is considered a major threat: missed opportunities for the EU could mean poor implementation of major programmes such as the EU Green Deal, a lost competitive advantage relative to other parts of the world, economic stagnation and poorer possibilities for people. Underuse could derive from public and business mistrust in AI, poor infrastructure, lack of initiative, low investment or, since AI's machine learning depends on data, from fragmented digital markets.

Overuse can also be problematic: investing in AI applications that prove not to be useful or applying AI to tasks for which it is not suited, for example using it to explain complex societal issues.

An important challenge is determining who is responsible for damage caused by an AI-operated device or service. In an accident involving a self-driving car, should the damage be covered by the owner, the car manufacturer or the programmer?

If the producer were absolutely free of accountability, there might be no incentive to provide a good product or service, and it could damage people's trust in the technology; but regulations could also be too strict and stifle innovation.

The results that AI produces depend on how it is designed and what data it uses. Both design and data can be intentionally or unintentionally biased. For example, some important aspects of an issue might not be programmed into the algorithm, or might be programmed to reflect and replicate structural biases. In addition, the use of numbers to represent complex social reality can make AI seem factual and precise when it isn't. This is sometimes referred to as "mathwashing".
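A minimal sketch of the mathwashing effect described above: a "score" can look exact while only restating the bias in its training data. The groups and numbers here are invented for the example.

```python
# Toy historical lending data: (group, approved). "Group" stands in for a
# proxy attribute such as a postcode; the disparity below is deliberate.
historical_loans = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def fit_scores(data):
    """'Train' by computing each group's historical approval rate."""
    totals, counts = {}, {}
    for group, approved in data:
        totals[group] = totals.get(group, 0) + approved
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

scores = fit_scores(historical_loans)
# The output looks like an objective, precise measurement, but it encodes
# only the historical disparity between the groups, not applicant merit.
print(scores)  # {'A': 0.75, 'B': 0.25}
```

Any decision built on these numbers would simply reproduce the skew in the data while presenting it with numerical authority.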

If not done properly, AI could lead to decisions influenced by data on ethnicity, sex, age when hiring or firing, offering loans, or even in criminal proceedings.

AI could severely affect the right to privacy and data protection. It can, for example, be used in face recognition equipment or for online tracking and profiling of individuals. In addition, AI enables merging pieces of information a person has given into new data, which can lead to results the person would not expect.

It can also present a threat to democracy: AI has already been blamed for creating online echo chambers based on a person's previous online behaviour, displaying only content the person would like instead of creating an environment for pluralistic, equally accessible and inclusive public debate. It can even be used to create extremely realistic fake video, audio and images, known as deepfakes, which can present financial risks, harm reputations and challenge decision-making. All of this could lead to separation and polarisation in the public sphere and to manipulated elections.

AI could also play a role in harming freedom of assembly and protest as it could track and profile individuals linked to certain beliefs or actions.

Use of AI in the workplace is expected to result in the elimination of a large number of jobs. Though AI is also expected to create new and better jobs, education and training will have a crucial role in preventing long-term unemployment and ensuring a skilled workforce.

Read more:
Artificial intelligence: threats and opportunities | News - EU News

The Army just conducted a massive test of its battlefield artificial intelligence in the desert – DefenseNews.com


The rest is here:
The Army just conducted a massive test of its battlefield artificial intelligence in the desert - DefenseNews.com