The future of school may be outdoors, even after the pandemic – CBC.ca

It's a five-minute walk from the nearest road to the wooden sign that announces the site of the Guelph Outdoor School. There, in a clearing in the woods, a registration table is the only visible infrastructure.

On a sunny August weekday in Guelph, Ont., dozens of kids find their way down the path, equipped with hats and bug spray, everything they'll need for a full day outdoors. Cohorted into groups of 10, they play games, trek along a series of well-worn paths, study found bird bones, and learn things like how to tell which plant is Queen Anne's lace and which is poisonous water hemlock.

This is summer camp, but the Guelph Outdoor School runs similar programs year-round. In the past, the full-day fall and winter programs have been more of a niche attraction for students aged four to 14 with enough stamina to brave the wilderness in January. Many were homeschooled or had a special arrangement with their regular school to attend once or twice a week.

In 2020, though, with fresh air seen as a way to lower the risk of COVID-19 transmission, more parents are seeing the value in moving their kids outside through programming like this.

"The phone is [ringing] off the hook and I can't even keep track," said Chris Green, a former classroom teacher who started the outdoor school eight years ago.

He and his team have added seven new programs this year, all of which have been filling up. They've also partnered with a local Montessori school to offer a full-time option, where around 30 kids, split into two groups, will spend half the day in a classroom and the other half outdoors.

"For me, it's always made sense to have kids outside," Green said. "And now it makes double the sense, because it has now shifted from an educational and developmental initiative, to a kind of preventative public health initiative."

Even those who were already converts to the school's philosophy are thinking differently about its value.

Cheryl Cadogan's 13-year-old son, David, normally attends programming there one day a week during the school year. But this year, Cadogan said, their family has been on heightened alert since her partner is immunocompromised.

"It's not safe for us as a family to have him go back to school," she said.

David will instead take his Grade 8 classes online, while also spending a few days a week at the outdoor school.

Cadogan said she knows there's still a risk, but she is heeding the words of Dr. Anthony Fauci, head of the U.S. National Institute of Allergy and Infectious Diseases, who has said that outdoors is better than indoors.

Indeed, the appeal of open-air activities during the COVID-19 pandemic is rooted in science. Dr. Linsey Marr of Virginia Tech studies how viruses spread through the air. She said COVID-19 transmission by air is happening: "There's really no question anymore."

When asked why there's a lower risk of transmission outside, she recommended picturing a smoker. Outside, she said, the exhaled smoke "rapidly disperses throughout the atmosphere and becomes very dilute." Indoors, on the other hand, it gets "trapped."

While masks, physical distancing and proper ventilation can go a long way to help curb the spread of the virus in schools, Dr. Marr said she would seize upon "any opportunity that there is to move an activity outdoors."

The Toronto District School Board (TDSB) is trying to increase those opportunities for its students, encouraging teachers to take classes outside whenever possible this year. But schools that don't have a forest on their property will need to think differently about using the space beyond their doors.

David Hawker-Budlovsky is the Central Coordinating Principal for outdoor education at the TDSB. While it won't be possible for many large downtown schools to have full-day outdoor programming, he said teachers will be able to schedule time in the yard, while staggering entries and exits to maintain physical distance.

Teachers and students will have to get used to "traveling around and using the community as classroom as well," he said. Ideas range from reading aloud to a class in the yard, to teaching about climate change in a nearby ravine, or learning about local history while walking around the neighbourhood.

Hawker-Budlovsky said there will be challenges, and admitted the plan has skeptics. But he's excited about the idea of getting kids outside more often.

"I think what's really important is to be able to look at this [with] an open mind, be creative and be as flexible as possible," he said.

Open-mindedness will certainly be a valuable trait for those holding open-air classes in the Canadian winter. But according to Pamela Gibson, a former teacher who now consults on sustainability and outdoor education with Learning for a Sustainable Future (LSF), students and teachers can get past the weather.

"There is no bad weather," she said. "There are just bad clothes." Over time, she said, people can learn how to prepare themselves for those less-than-perfect forecasts.

In the early 2000s, as a teacher at Belfountain Public School in Caledon, Ont., Gibson began experimenting with open-air class time. The idea was initially spurred by a group of parents looking for ways for their kids to spend more time outside on the 10-acre property surrounding the school.

At first, she said, "we had the usual kids that hung around the doors and really felt uncomfortable. But as time went on, we [didn't] have those door hangers anymore."

Outdoor learning has become so ingrained there, she said students will sometimes spend two-thirds of their days in the yard or out in the community, working on class projects.

Teachers looking to adopt similar programs elsewhere, she said, will have to be creative. But as the Belfountain experience shows, even a tree can be looked to as a "possible source of curriculum."

Gibson suggested educators ask themselves, "What's the math in that tree? What's the science in that tree? Where are the arts in that tree?" She believes it's all there.

Holding classes outside in the community is not only possible, Gibson said, but is "crucial," even beyond the pandemic. Curriculum, she said, is "supposed to be what children need to function in the world, not just inside the building [and] not just inside their homes."

With the spectre of COVID-19 pushing educators to look differently at their classrooms, Gibson said, there's "an opportunity for great change," and perhaps even a chance to improve the system for the future.


All you need to know about the Indian AI Stack – MediaNama.com

A committee under the Department of Telecommunications has released a draft framework for the Indian Artificial Intelligence Stack, which seeks to remove impediments to AI deployment and essentially proposes a six-layered stack, with each layer handling different functions, including consent gathering, storage, and AI/Machine Learning (AI/ML) analytics. Once developed, the stack is intended to be structured across all sectors and to address data protection, data minimisation, open algorithm frameworks, defined data structures, trustworthiness and digital rights, and data federation (a single database source for front-end applications), among other things. The paper also said that there is no uniform definition of AI.

This committee, the AI Standardisation Committee, had, in October last year, invited papers on Artificial Intelligence addressing different aspects of AI, such as functional network architecture, AI architecture, and the data structures required, among other things. At the time, the DoT had said that as the proliferation of AI increases, there is a need to develop an Indian AI stack so as to bring interoperability, among other things. Here is a summary of the draft Indian AI Stack, on which comments can be emailed to aigroup-dot@gov.in or diradmnap-dot@gov.in until October 3.

The stack will be made up of five main horizontal layers and one vertical layer:

The infrastructure layer is the root layer of the Indian AI stack, over which the entire AI functionality is built. This layer will ensure the setting up of a common data controller and will involve multi-cloud scenarios spanning both private and public clouds. This is where the infrastructure for data collection will be defined, and where the multilayer cloud services model will define the relations between cloud service models and the other functional layers.

The storage layer will have to define the protocols and interfaces for storing hot data, cold data, and warm data (all three defined below). The paper called this the most important layer in the stack, regardless of the size and type of data, since value can only be derived from data once it is processed, and data can only be processed efficiently when it is stored properly. It is important to store data safely for a very long time, while managing all factors of seasonality and trends and ensuring that it is easily accessible and shareable on any device, the paper said.

The paper has created three subcategories of data depending on the relevance of data and its usability:

Categories of data
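The paper's definitions of the three data categories are not reproduced in this summary, but tiering by access recency is the standard interpretation of hot, warm, and cold storage. A minimal sketch of how a storage layer might route records to tiers, assuming purely illustrative thresholds (the names and cut-offs below are not from the paper):

```python
from enum import Enum

class DataTemperature(Enum):
    HOT = "hot"      # frequently accessed; needs low-latency storage
    WARM = "warm"    # occasionally accessed
    COLD = "cold"    # rarely accessed; archival storage

def classify(days_since_last_access: int,
             hot_threshold: int = 7,
             warm_threshold: int = 90) -> DataTemperature:
    """Route a record to a storage tier by recency of access."""
    if days_since_last_access <= hot_threshold:
        return DataTemperature.HOT
    if days_since_last_access <= warm_threshold:
        return DataTemperature.WARM
    return DataTemperature.COLD
```

In practice, each tier would map to a different storage backend and cost profile, which is why the paper treats proper storage as a precondition for efficient processing.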

The compute layer, through a set of defined protocols and templates, ensures an open algorithm framework. The AI/ML processes could include Natural Language Processing (NLP), deep learning, and neural networks. This layer will also define data analytics, including data engineering, which focuses on practical applications of data collection and analysis, apart from scaling and data ingestion. Technology mapping and rule execution will also be part of this layer.

The paper acknowledged the need for a proper data protection framework: the compute layer involves analysis to mine vast troves of personal data and find correlations, which are then used for various computations. This raises various privacy issues, as well as broader issues of lack of due process, discrimination, and consumer protection.

"The data so collected can shed light on most aspects of individuals' lives. It can also provide information on their interactions and patterns of movement across physical and networked spaces, and even on their personalities. The mining of such large troves of data to seek out new correlations creates many potential uses for Big Personal Data. Hence, there is a need to define proper data protection mechanisms in this layer, along with suitable data encryption and minimisation," the paper said.
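A minimal sketch of what field-level minimisation before compute-layer analytics could look like. The HMAC-based pseudonymisation scheme and all names here are illustrative assumptions, not anything specified by the paper:

```python
import hashlib
import hmac
import os

# Hypothetical server-side secret; in a real deployment this would live
# in a key-management service, not in process memory.
PEPPER = os.urandom(32)

def pseudonymise(value: str, pepper: bytes = PEPPER) -> str:
    """Replace a personal identifier with a keyed hash so analytics can
    correlate records without seeing the raw value."""
    return hmac.new(pepper, value.encode("utf-8"), hashlib.sha256).hexdigest()

def minimise(record: dict, personal_fields: set) -> dict:
    """Pseudonymise personal fields before the compute layer mines the data."""
    return {
        k: (pseudonymise(v) if k in personal_fields else v)
        for k, v in record.items()
    }
```

Because the keyed hash is stable within a deployment, the same identifier always maps to the same pseudonym, so correlations can still be mined without exposing the raw personal value.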

The compute layer will also define a new way to build and deploy enterprise service-oriented architectures, along with providing a transparent computing architecture over which the industry could develop its own analytics. It will have to provide for a distinction between public, shared, and private data sources, so that machine learning algorithms can be applied against the relevant data fields.

The report also said that the NITI Aayog has proposed an AI-specific cloud compute infrastructure, which will facilitate research and solution development using high-performance and high-throughput AI-specific supercomputing technologies. The broad specifications for this proposed cloud controller architecture may include:

Proposed architecture of AI specific controller

The paper described this as a purpose-built layer through which software and applications can be hosted and executed as a service. This layer will also support various backend services for the processing of data and will provide a proper service framework for the AI engine to function. It will also keep track of all transactions across the stack, helping in logging and auditing activities.

This layer will define the end-customer experience through defined data structures and proper interfaces and protocols. It will have to support a proper consent framework for access to data by or for the customer. Consent can be granted for individual data fields or for collective fields. This layer will also host gateway services. Typically, different tiers of consent will be made available to accommodate different tiers of permissions, the paper said.
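One way to picture tiered consent with per-field grants is sketched below; the tier names, fields, and record shape are hypothetical, not drawn from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Consent held for one user: individual field grants plus tier grants."""
    user_id: str
    granted_fields: set = field(default_factory=set)  # individual data fields
    granted_tiers: set = field(default_factory=set)   # collective permission tiers

# Illustrative tiers mapping a collective grant to the fields it covers.
TIERS = {
    "basic": {"name", "email"},
    "analytics": {"location", "usage_stats"},
}

def may_access(consent: ConsentRecord, data_field: str) -> bool:
    """A field is accessible if granted individually or via a granted tier."""
    if data_field in consent.granted_fields:
        return True
    return any(data_field in TIERS[t]
               for t in consent.granted_tiers if t in TIERS)
```

A gateway service in this layer would consult such a record before releasing any field to a downstream consumer.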

This layer also needs to ensure that ethical standards are followed to protect digital rights. In the absence of a clear data protection law in the country, the EU's General Data Protection Regulation (GDPR) or other such laws can be applied; this will serve as an interim measure until Indian laws are formalised, the paper said.

The vertical layer will ensure security and governance for all five horizontal layers. There will be an overwhelming flow of data through the stack, which is why there is a need to ensure encryption at different levels, the paper said. This may require the ability to handle multiple queries in an encrypted environment, among other things. Cryptographic support is also an important dimension of the security layer, the paper said.

Why this layer is important, per the paper: data aggregated, transmitted, stored, and used by various stakeholders may increase the potential for discriminatory practices and pose substantial privacy and cybersecurity challenges. The data processed and stored in many cases includes geolocation information, product-identifying data, and personal information related to use or owner identity, such as biometric data, health information, or smart-home metrics.

"Data storage in backend systems can present challenges in protecting data from cyberattacks. In addition to personal-information privacy concerns, there could be data used in system operation, which may not typically be personal information. Cyber attackers could misuse these data by compromising data availability or changing data, causing data integrity issues, and could use big data insights to reinforce or create discriminatory outcomes. When data is not available, causing a system to fail, it can result in damage; for example, a smart home's furnace overheats or an individual's medical device cannot function when required," the paper said.

How the proposed AI stack looks

According to the report, the key benefits of this proposed AI stack are:

This is how the paper proposes data flow through the stack:

Proposed AI flowchart

In AI, the thrust is on how efficiently data is used, the paper said, noting that if the data is garbage, then the output will be too. For example, if programmers or AI trainers transfer their biases to AI, the system will become biased, the paper said. There is a need for evolving ethical standards, trustworthiness, and a consent framework to get data validation from users, the paper suggested.

The risks of passive adoption of AI that automates human decision-making are also severe. Such delegation can lead to harmful, unintended consequences, especially when it involves sensitive decisions or tasks and excludes human supervision, the paper said. It gave Microsoft's Twitter chatbot Tay as an example of what can happen when garbage data is fed into an AI system. Tay had started tweeting racist and misogynist remarks within 24 hours of its launch.

Need for openness in AI algorithms: The paper said it was necessary to have an open AI algorithm framework, along with clearly defined data structures. It referenced how the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software, used by some US courts to predict the likelihood of recidivism in criminal defendants, was demonstrated to be biased, since the AI black box was proprietary.

As AI learns to address societal problems, it also develops its own hidden biases. The self-learning nature of AI means that the distorted data the AI discovers in search engines, perhaps based upon unconscious and institutional biases and other prejudices, is codified into a matrix that will make decisions for years to come. In the pursuit of being the best at its task, the AI may make decisions it considers the most effective or efficient for its given objective, but because of the wrong data, it becomes unfair to humans, the report said.

Need to centrally control data: Right after the paper made a pitch for openness in AI algorithms, it proposed that the data fed into the AI system should be controlled centrally. The data from which the AI learns can itself be flawed or biased, leading to flawed automated AI decisions. This is certainly not the intention of algorithmised decision-making, which is perhaps a good-faith attempt to remove unbridled discretion and its inherent biases. There is thus a need to ensure that the data is centrally controlled, including by using a single cloud controller or multiple cloud controllers, the report said.

Proper storage frameworks for AI: An important factor in aiding biases in AI systems is contamination of data, per the paper, which includes missing information, inconsistent data, or simply errors. This could be because of unstructured storage of data. Thus, there is a need to ensure proper storage frameworks for AI, it said.

Changing the culture of coders and developers: There is a need to change the culture so that coders and developers themselves recognise the harmful and consequential implications of biases, the paper said, adding that this goes beyond standardisation of the type of algorithmic code and focuses on the programmers of the code. Since much coding is outsourced, this would place the onus on the company developing the software product to enforce such standards. Such a comprehensive approach would tackle the problem across the industry as a whole and enable AI software to make fair decisions based on unbiased data, in a transparent manner, it added.

In the near future, AI will have huge implications for the country's security, its economic activities, and its society. The risks are unpredictable and unprecedented. Therefore, it is imperative for all countries, including India, to develop a stack that fits into a standard model and protects customers, users, business establishments, and the government.

Economic impact: AI will have a major impact on mainly four sectors, per the paper: manufacturing industries, professional services, financial services, and wholesale and retail. The paper also charted out how AI could be used in some specific sectors. For instance, in healthcare, it said in rural areas, which suffer from limited availability of healthcare professionals and facilities, AI could be used for diagnostics, personalised treatment, early identification of potential pandemics, and imaging diagnostics, among others.

Similarly, in the banking and financial services sector, AI can be used for things like the development of credit scores through analysis of bank history or social media data, and fraud analytics for proactive monitoring and prevention of various instances of fraud, money laundering, and malpractice, and for the prediction of potential risks, according to the report.

Uses for the government: For governments, for example, cybersecurity attacks can be rectified within hours rather than months, and national spending patterns can be monitored in real time to instantly gauge inflation levels while collecting indirect taxes.


Activision edges out Sony and Nintendo in August's TV ad spend – VentureBeat

Gaming brands upped their outlay on TV advertising in August by 26.66% compared to July, for an estimated spend of $22.5 million. There was almost a three-way tie for top-spending brands, with Activision edging out longtime chart leader PlayStation. In total, 11 brands aired 43 spots over 5,000 times, resulting in 1.1 billion TV ad impressions. Aside from Nintendo, each of the top brands targeted sports programming, especially NBA and MLB games, for ads during the month.

GamesBeat has partnered with iSpot.tv, the always-on TV ad measurement and attribution platform, to bring you a monthly report on how gaming brands are spending. The results below are for the top five gaming-industry brands in August, ranked by estimated national TV ad spend.

Activision spent an estimated $6.2 million airing a single spot for Call of Duty: Warzone, "User Reviews," 627 times, resulting in 215.3 million TV ad impressions. The brand prioritized reaching a sports-loving audience: Top programming by outlay included the NBA, NHL, and MLB, while top networks included TNT, NBC Sports, and Fox.

PlayStation takes second place with an estimated spend of $5.8 million on four ads that ran 754 times, generating 214.7 million TV ad impressions. Most of the spend and impressions occurred in the second half of the month. The spot with the biggest spend (estimated at $3.8 million) was "Cannot Be Controlled," promoting the Marvel's Avengers game. ESPN, Adult Swim, and Comedy Central were three of the networks with the biggest outlay, while top programming included MLB, NBA, and South Park.

At No. 3: Nintendo, with an estimated spend of $4.9 million on 20 commercials that aired over 1,900 times, resulting in 355.8 million TV ad impressions. The top spot by spend (estimated at $677,351) was "She's My Favorite: Animal Crossing." Programs with the biggest outlay included SpongeBob SquarePants, The Loud House, and The Amazing World of Gumball; top networks included Nick, Cartoon Network, and Bravo.

Fourth place goes to Crystal Dynamics, which hadn't advertised on TV at all this year until August 20. The brand spent an estimated $3.2 million airing two ads, both for the Marvel's Avengers game, 397 times, generating 116.6 million TV ad impressions. "It's Time to Assemble" had the biggest outlay, an estimated $1.8 million. Three of the top programs by spend were the NBA, South Park, and MLB; top networks included ESPN, Adult Swim, and Comedy Central.

Rounding out the ranking is MLB Advanced Media Video Games with an estimated outlay of $825,253 on two spots that aired 323 times, resulting in 49.5 million TV ad impressions. "Home Runs," advertising R.B.I. Baseball 20, had the most spend (estimated at $729,251). Most of its outlay went to MLB games, but Ancient Top 10 and Baseball Tonight: Sunday Night Countdown were also in the mix. On the network side of things, the brand prioritized Fox Sports 1, ESPN, and Fox.
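Dividing each brand's estimated spend by its reported impressions gives an approximate effective CPM (cost per thousand impressions). The sketch below simply restates the article's estimates; the rounding, and the assumption that spend and impressions line up cleanly per brand, are ours:

```python
def ecpm(spend_dollars: float, impressions: float) -> float:
    """Effective cost per thousand ad impressions."""
    return spend_dollars / impressions * 1000

# Estimated August figures as reported above: (spend, impressions).
brands = {
    "Activision": (6_200_000, 215_300_000),
    "PlayStation": (5_800_000, 214_700_000),
    "Nintendo": (4_900_000, 355_800_000),
    "Crystal Dynamics": (3_200_000, 116_600_000),
    "MLB Advanced Media": (825_253, 49_500_000),
}

for brand, (spend, imps) in brands.items():
    print(f"{brand}: ${ecpm(spend, imps):.2f} CPM")
```

By this rough measure, Nintendo's kids-programming placements delivered impressions at roughly half the unit cost of the sports-heavy buys.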

For more about iSpot's attention and conversion analytics, visit iSpot.tv.


Sudbury to hold first drive-in concert – Sherwood Park News

Despite having no summer festival in July, the Northern Lights Festival Boréal team has been steadily planning new ways to bring live music experiences to Sudbury.

Last week, the organization was thrilled to announce NLFB #49, a diverse and exciting presentation of festival programming in alternative formats.

The long-running music and arts festival has been re-introducing live music in ways that are safe, responsible and fun. This special festival #49 programming culminates in the region's first-ever drive-in concert event, featuring some past festival favourites, as well as a few new faces.

The Sept. 19 concert will include Canadian roots-pop icon Serena Ryder, dynamic songwriter/performer Hawksley Workman, Toronto roots/folk/soul artist Julian Taylor, as well as locals Martine Fortin and Maxwell José.

The event will take place in partnership with Horizon Drive-in, at the New Sudbury Centre parking lot (1349 Lasalle Blvd.). Tickets are available online only at nlfb.ca/tickets.

Ryder is an artist adored by fans, peers and critics alike, in part due to her raw and earnest songwriting, and beautifully electric live performances. She has received numerous accolades, including six prestigious Juno Awards, a MuchMusic Video Award for Stompa, and a Canadian Screen Award for Achievement in Music Original Song.

Before her chart-smashing album, Harmony (2013), she also enjoyed success with previous releases, If Your Memory Serves You Well (2007), and Is it O.K. (2009), achieving Gold-selling status.

In 2012, her single "Weak In The Knees" also achieved Gold certification. Ryder's Christmas Kisses was named one of the Top 5 Christmas records of 2018 by Rolling Stone. She has also received the 2018 Margaret Trudeau Mental Health Advocacy Award and has been the face of the Bell Let's Talk campaign for multiple years.

A staple of the Canadian arts scene for almost 20 years, Hawksley Workman boasts a catalogue of 15 solo releases showcasing his now signature spectrum of sonic influence, from cabaret to electro-pop to anthemic rock and plenty in between.

The accolades amassed include JUNO nods and wins and widespread critical acclaim. As a producer, his fingerprints grace releases by Juno and Polaris Prize nominees, and winners like Tegan and Sara, Sarah Slean, Serena Ryder, Hey Rosetta!, and Great Big Sea.

He's also penned melodies with a myriad of artists, from Oscar-winning Marion Cotillard (La Vie en Rose, Inception) to French rock icon Johnny Hallyday.

Hawksley's touring career has seen him play nearly a thousand shows worldwide. He's headlined prestigious venues like Massey Hall in Toronto and The Olympia in Paris, and opened for heroes Morrissey, David Bowie, and The Cure.

Julian Taylor doesn't fit in a box. He never has, and more power to him. A Toronto music scene staple and a musical chameleon, Taylor is used to shaking it up over the course of 10 albums in the last two decades.

Of West Indian and Mohawk descent, Taylor first made his name as frontman of Staggered Crossing, a Canadian rock radio staple in the early 2000s. These days, however, the soulful singer/guitarist might be on stage one night playing with his eponymous band, spilling out electrified rhythm and blues glory; the next, he'll be performing at a folk festival, delivering a captivating solo singer-songwriter set.

Martine Fortin is a bilingual singer-songwriter from Sudbury and a past winner of NLFB's annual Meltdown Competition. Her music is a blend of pop, jazz, blues, soul, and rock, combined with intimate, introspective lyrics, and moving piano melodies. She will perform a few of her songs near the start of the evening.

Walking the line between country and folk, Maxwell José's songs draw from his experience growing up both in the North, on Lake Superior, and in southern Illinois. Anxiety, growing pains, and some good old-fashioned storytelling are key elements of his tunes. He will open the event by sharing a few of these songs.

Gates open at 6 p.m., and attendees are asked to arrive at that time to ensure vehicle placement before showtime. Tickets are $30 in advance and $40 at the gate.

Due to safety protocols around COVID-19 and general health and safety, concert-goers must remain in their vehicles during the show. For any questions regarding tickets, protocols, or the event in general, contact the NLFB team at marketing@nlfb.ca or 705-674-5512.



Four Out of Eight Doesn’t Cut It: The IP Safeguards that Most Lawyers Miss When Protecting Software – IPWatchdog.com

Eight safeguards are essential for a full, robust software protection regime. [But most lawyers] only learned about four of them in law school. In today's world, lawyers need to go beyond law school and include real-world, practical solutions to augment the legal protections that are their bread and butter.

Software is an extremely valuable good for those who produce it because it provides value to the software's end users. That value, however, also makes it a target for those who would prefer to obtain the value without compensating the software producer. As a result, as with any valuable asset, software suppliers and Internet of Things (IoT) companies must implement safeguards to protect it. Since software is intellectual property, attorneys who work for or advise software producers (which, let's be honest, is just about every technology company these days, given that the ubiquity of smart devices has added hardware manufacturers to the desktop, mobile, and SaaS applications we all use in both our personal and business lives) are frequently asked to advise on how best to protect this valuable asset. Unfortunately, as discussed below, most lawyers only deliver half of what they should.

Eight safeguards are essential for a full, robust software protection regime. Despite that, most lawyers talk about only four of them. In their defense, they only learned about four of them in law school, which is why that's their go-to advice. But in today's world, lawyers need to go beyond law school and include real-world, practical solutions to augment the legal protections that are their bread and butter. This article will review all eight, but the bulk of the discussion will illustrate the importance and usefulness of the four less-frequently discussed methods.

The first four methodswhich lawyers already know about and take action onfocus on protecting software from a purely legal perspective:

While each legal protection discussed above is great and must be considered, the truth is that the best outcome for the software producer is never having to rely on the legal protections at all. Enforcing legal rights is expensive, time-consuming, and distracting to a software producer that would much rather focus on developing the next product instead of defending the last one. How do you avoid having to rely on the legal protections we just laid out? By using technology to prevent the software from being misused or overused, intentionally or unintentionally, in the first place. Below are some practical solutions your clients can deploy:

The final software protection strategy is less about protecting the software producer's rights; it's more about solidifying the foundation of the software producer's products. Most commercial code today relies to some degree (and often to a great degree) on open source software components. Open source software, as referenced above, is code developed by third parties and then incorporated into a software producer's final product. While there's usually no fee charged by the open source provider, the open source code is subject to contractual requirements. Given that this whole article is about how to protect your clients' software from misuse, it would be ironic if your client then failed to take the necessary steps to ensure it didn't misuse open source software! Unfortunately, open source software goes unmanaged at most companies. On average, software producers are aware of and manage a mere 5% of the open source software used in their products. Failure to fully document and understand open source usage, and to comply with relevant obligations, can undo all of the careful work done through the first seven safeguards above. With this in mind, the final safeguard is:

In 1941, baseball's Ted Williams of the Boston Red Sox finished the season with a .406 batting average, meaning he got a hit in 40.6% of his at bats. In the 79 years since, no one has equaled that feat, heralded as one of baseball's unbreakable records. While that's impressive, it means Mr. Williams failed almost 60% of the time! In the world of software protection, the good news is that most lawyers are out-hitting Ted Williams, usually deploying 50% of the protection schemes available to them. The bad news: 50% isn't cause for celebration. Using the additional four practical protections discussed above is what will provide a comprehensive approach to protecting software. Eight out of eight: that's something to celebrate.


Marty Mellican is Vice President and Associate General Counsel at Revenera (formerly known as Flexera's Supplier Division).


Mintegral SDK Going Open-Source For Increased Transparency And Security – PRNewswire

BEIJING, Sept. 4, 2020 /PRNewswire/ -- Following recent allegations about the ad data collected by the Mintegral SDK, Mintegral has announced that its SDK will officially become open-source. While Mintegral denies the allegations, the move to open-source the SDK is seen as a way to increase transparency within the ad-tech industry and provide complete visibility into the SDK's inner workings.

The move to an open-source SDK

Mintegral's move to an open-source SDK lets mobile developers see exactly how it works, without any concerns about unwanted code or functionality. Since everyone will be able to access the code (and improve upon it), the Mintegral open-source SDK will become more secure. In turn, it will be easier and faster to identify and address any risk, as the SDK will undergo constant review by the mobile community.

Open-source SDKs are the future

Moving to open-source SDKs benefits the whole mobile ad industry by providing customers and end-users with increased transparency and security, while further emphasizing features like speed, quality, and customizability. With this move, Mintegral is also encouraging other members of the mobile industry to do the same, so the entire advertising ecosystem can thrive.

"We are about to move our industry-leading SDK to an open-source environment, and I am excited about the possibilities that come with it, not just for our team, but for all our partners, and ultimately everyone in the mobile ad industry. I believe transparency is a crucial tool towards a stronger and safer industry, and we will work hard to make sure that our partners and our clients will constantly benefit from the highest transparency and security standards available," said Erick Fang, Mintegral CEO.

As the industry demands more transparency from its players, Mintegral believes it's important to provide full clarity around the SDK and its capabilities. From its COPPA certification and its adoption of app-ads.txt to fight ad fraud, to earning Open Measurement SDK certification from the IAB Tech Lab and adding support for Sellers.json and the Supply Chain Object, Mintegral has long been a major advocate for data privacy, security, and transparency.

What's next?

Mintegral plans on rolling out the open-source SDK within the next week and will notify its partners and clients as soon as the changes have been made. Mintegral will also release a breakdown of how the SDK operates in a future article. For more news on this matter, clients are welcome to reach out to the Mintegral team or follow the Mintegral blog.

Contact: [emailprotected]

SOURCE Mintegral

https://www.mintegral.com/en/


Q&A: Open Source advocate David Strejc on why the Czech IT industry is so overpriced – Expats.cz

When the initial wave of news regarding the coronavirus crisis and the Czech government's response first came out back in March, Expats.cz experienced a surge in traffic that saw our numbers balloon to record highs at peak times. Our news server couldn't handle the volume and immediately crashed as the traffic spiked.

Thankfully, we had the right support on hand: David Strejc of WPDistro, a Prague-based developer who specializes in WordPress and open source solutions. Since migrating to WPDistro's servers six months ago, our news site hasn't seen any downtime at all.

David is also a long-term advocate of Open Source software solutions, and provides the CRM system AutoCRM, which builds upon a solution used by more than 50,000 companies worldwide.

We recently spoke with David about the Information Technology sector in the Czech Republic, how it compares to the rest of the world, and what's in store for the future.

Hi David! What's your IT background and expertise?

I'm a long-term IT guy, currently 16 years in the IT business. I've worked for two telco providers, O2 and T-Mobile, as IT Architect and Senior Solution Designer, respectively; established the company Easy Software with my partners; and then sold my share in it. They are currently providing SaaS around the globe to more than 3,000 companies. I was IT Architect and CTO there.

What about IT here in the Czech Republic?

On one side, the Czech Republic is the fifth-biggest software producer in terms of absolute numbers. Companies like Red Hat and others rely on Czech programmers. We place second in some programming skill tests, sometimes behind the Slovaks. We have great people in IT: highly skilled, hard-working.

On the other hand, we are four years behind the actual trends in the US.

How can this be?

There is a huge difference between the IT sector and the companies that consume IT products. Take an average factory or a classic office, for example. They still mentally live in the '90s. And many software producers here in the Czech Republic make software designed for this kind of mentality. Updates come twice a year, no one answers your email for more than a week, and no one picks up your urgent call for support.

What do you think of world trends in IT compared to the Czech Republic?

Take companies like Facebook, Google, and now even Microsoft. We live in crazy times where the current trend is to deploy software changes, security patches, new features and so on in a matter of days. Sometimes daily. Even more often. This is called CI/CD: continuous integration, continuous deployment.

Is this what makes IT/ICT so expensive: having skilled people pushing code to production so quickly?

Actually, it is not. It is the opposite. The extreme overpricing of IT here in the Czech Republic, and in other countries too, is due to the duplication of work.

Take CRM or ERP systems, for example. There are dozens of them. Even here in the Czech Republic we have nearly 50 well-established companies producing the same solutions. All of them are producing the same thing, reinvented 50 times. Multiply those 50 software solutions by 10 to 20 programmers each, and you get extreme monthly costs.

Isn't that OK? Isn't software about freedom of choice?

Exactly. But when you buy one of those solutions, you are done. Your freedom of choice ends when you implement one of these classically produced software solutions. We have a great example here in the Czech Republic, where our Ministry of Finance runs on nearly 30-year-old software (yes, you read that correctly), thanks to old contracts with IBM, who didn't bother to give us permission to use their software without them (again, you read that correctly). It is still their software; due to their policy, we can't simply switch to another supplier.

And it is the same with nearly all of those classic Czech software producers. They duplicate work, catch their customers in a vendor lock-in net like fish, and then, like parasites, drain money from them for every patch, every software update, every overpriced feature. Once you are caught, you can say goodbye to your IT freedom. Those companies need the money to feed their highly overpriced programmers, who are duplicating the same functions as their competitors.

And this leads to high prices for the end consumer.

It's a classic case of reinventing the wheel. But many, many, many times over. And you need many, many, many inventors.

And the HR market is happy. Programmers and IT guys cost a lot of money; HR treats them as a resource or product and delivers them into a company, where they last for two years before moving on to another one. The job agencies are happy, and the programmers are happy; they ask for more and more. The only unhappy one is the final customer, who comes from outside the IT sector.

Not only is the final product highly overpriced, but the customer is also locked into the vendor, and the staffing fluctuation in the IT industry causes even more trouble. Because Franta, the lead programmer at ABC Softcorp s.r.o., is suddenly gone. No one knows what exactly this function is for, or how to solve that, and now half the code has to be renamed because it is written in Czech (yes, this is still the case: they are naming variables in the Czech language).

Is there an alternative? How is IT done in other countries, like the US?

These days, my long-term predictions are coming true. For 16 years I have been an Open Source advocate. Not because of the price. Not because of the security (it can be argued that OSS is more secure than classic software; take OpenBSD, for example). Not because I can dig into the code, but because of the resources. Because when we compete too much, we destroy, we die.

Companies like Facebook, Google, LinkedIn and many others based in the US (now even Microsoft, who loves Linux and Open Source) have discovered that IT is such a different area that we can produce software that even competitors collaborate on.

Give us an example.

Engineers at Red Hat are producing the OpenShift development platform, for example. And their customers have dedicated teams of their own engineers, sitting in offices or home offices, patching, debugging, and improving this huge platform. So customers are contributing their time, money, and project management to get better software from their supplier.

And this is happening across the entire modern IT world. My old, Unix way of doing things is now becoming a reality. IT companies that compete and try to destroy their competitors, like here in the Czech Republic, will die out. Companies that support each other will produce more complex, more stable solutions for less money, which in many cases means Open Source software. MariaDB, which is OSS, says it can replace Oracle, which costs $50,000 per core per year.

Are there any other advantages of OSS?

You can say: I don't like you anymore, producer of ABC OSS software. I will take Youngsters Ltd.; they know what they are doing, they are young and full of energy, they pick up my phone after one ring, they answer my emails immediately. And the OSS producer can't say: you can't do that, it is our licensed software which we are only leasing to you, and everything is ours. Now Youngsters Ltd. will take over support and development of new features. And a whole community of developers is contributing bug fixes and feature requests and deciding the best plan for this ABC OSS software.

You don't have to use Microsoft SQL Server or Oracle, which only add expense for no value. MariaDB clusters are now used even in mission-critical areas of the banking industry. Facebook runs the biggest cluster of MySQL databases in the world. Imagine what Facebook's Oracle bills would be if they were using it.

You've mentioned Facebook. Do you like what they're doing?

I am not a big fan of social networks, but I am a big fan of the philosophy. Unix is more of a philosophy than just software; it is a way of doing things. And I love it. Facebook has inspired me in many ways. They produced the first versions of Facebook using the PHP programming language (which many so-called skilled programmers hate) and have used MySQL databases from the beginning. Craziness? I don't think so. PHP programmers were many times cheaper, more available, and easier to hire.

Cheaper and more effective. Open Source from the beginning. Even now, Facebook produces Open Source software and gives it to the public, such as React and many other projects that are not as well known. They also created the Open Compute Project, giving out designs and plans for hardware: servers, racks and so on.

And this is the philosophy of Open Source, Unix, and the original hacker culture. Do it for fun. Do it at the lowest available cost, because we live in an agile world and we want to see a proof of concept in the shortest available time. We don't want to wait a year or two; the whole industry can shift somewhere else. We want a software solution now, so we can put our hands on it and tweak it to fit our needs.

This sounds like an entirely different approach to producing software.

It's actually a very old concept. But when Microsoft, Oracle and others took over the industry in the '80s and '90s with their waterfall methods, postponed releases, and plans made years ahead, they changed people's perspectives. Now, pure technology is winning over overpriced business. In 2019, CNBC produced a video about Open Source software taking over the whole industry.

Are you afraid of Open Source? Do you consider it insecure? You are using it daily. It's in your pocket in the form of Android, and even iOS is built on open-source components. When you browse the web using Chrome or Firefox, on the other side there is Apache or Nginx, both of which are Open Source. More than 60% of virtual machines on Azure run Linux, Facebook runs on MySQL clusters, Google released its container platform (the successor to Omega) to the public as Kubernetes, and every last one of the top 500 supercomputers in the world runs Linux. The current zeitgeist is Open Source.

And do you produce Open Source in your current company?

We focus on the harder part of Open Source. We provide support, writing the little pieces of code that are not yet available, because we mainly work with WordPress, which is a great example of Open Source taking over the world. WordPress powers 38% of the top 10,000 websites in the world, and more than 35% of the entire web. And it's still growing.

This is a great example of many developers collaborating on one platform. Microsoft uses WordPress on news.microsoft.com, Facebook uses it on about.fb.com, and the White House, The New York Times, TechCrunch, and many other big brands use it too.

The advantages are obvious. You can instantly hire a developer who is familiar with the code, the principles, the documentation and the culture. Web administrators in companies are familiar with the interface they work in. If your supplier (which can even be us) doesn't suit your communication style, you can search Google for a new one and find them in two minutes.

Is WordPress the only solution you specialize in?

We focus on bigger e-commerce sites and, let's say, high-end WordPress websites for now. But we put our company's brainpower elsewhere. We have a great Open Source CRM that is enormously powerful in terms of flexibility, speed, and software philosophy (it is headless, so it can easily be connected to any other application through a REST API). Right now we are building a website to offer this solution to other companies here in the Czech Republic and around Europe.
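To illustrate what "headless, connected through a REST API" means in practice, here is a minimal sketch of such an integration. The base URL, endpoint path, field names, and bearer-token authentication below are hypothetical illustrations, not AutoCRM's actual API:

```python
import json
from urllib import request

# Hypothetical endpoint; a real headless CRM documents its own REST
# resources and authentication scheme.
BASE_URL = "https://crm.example.com/api/v1"

def build_create_contact_request(name: str, email: str, token: str) -> request.Request:
    """Builds (but does not send) a POST request that creates a CRM contact."""
    payload = json.dumps({"name": name, "email": email}).encode("utf-8")
    return request.Request(
        url=f"{BASE_URL}/contacts",
        data=payload,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # token auth is an assumption
        },
    )

req = build_create_contact_request("Jane Doe", "jane@example.com", "secret-token")
print(req.get_method(), req.full_url)
```

Because the request object is constructed separately from being sent, the integration can be exercised without touching the network; dispatching it is a single `urllib.request.urlopen(req)` call.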

For more information about WPDistro, and how they can help your online business, visit their official website.


Threema encrypted messaging apps will soon be open source – SlashGear

Threema, an encrypted messaging service that offers a substantial number of features, has announced a big business change that may increase some otherwise-skeptical users' trust in the platform. In its announcement, the Threema team said its messaging apps will soon be made fully open source, making it easier to independently review the apps' security and verify their code.

While there's no lack of secure messaging apps on the market, some are more private than others. There are messaging services where the messages reside on the company's servers, and then there are encrypted messaging services where the company isn't able to access users' data. Threema falls into the latter category.

Unlike apps like Telegram, which is targeted more at the average consumer, Threema is a higher-end product with a variety of features, including support for voice and text messages, groups, distribution lists, and sending files like MP3s and PDFs. Users can also share locations and images/videos.

Compared with the more popular encrypted messaging app Signal, Threema had both an upside and a downside. The upside? Threema assigns each user a unique ID, eliminating the need for a phone number. The downside? Threema wasn't open source, unlike Signal, which was a concern for some potential users.

In its update on Thursday, Threema announced that it has partnered with Afinum Management AG and that it is making its apps open source. The open-source change applies only to the apps, not the backend, but Threema notes that it has conducted, and will continue to conduct, regular external reviews. Likewise, Threema says its users will soon be able to use multiple devices in parallel without compromising their data.


Facebook to warn third-party developers of vulnerable code – TechCrunch

Facebook has announced a policy change that will see the company notify third-party developers if it finds a security vulnerability in their code.

In a blog post announcing the change, Facebook said it may occasionally find critical bugs and vulnerabilities in third-party code and systems. "When that happens, our priority is to see these issues promptly fixed, while making sure that people impacted are informed so that they can protect themselves by deploying a patch or updating their systems."

Facebook has previously notified third-party developers of vulnerabilities, but the shift formally codifies the company's policy toward disclosing and revealing security vulnerabilities.

Vulnerability disclosure programs, or VDPs, allow companies to set the rules of engagement for finding and disclosing security bugs. VDPs also help guide the disclosure and publication of vulnerabilities once a bug is fixed. Companies often use a bug bounty to pay hackers who follow the company's reporting and disclosure rules.

The policy change is not entirely altruistic. Facebook, like many other tech companies, relies on a ton of third-party code and open-source libraries. But by putting the change in writing, it also puts third-party developers on notice if they dont fix vulnerabilities in a timely fashion.

Casey Ellis, founder and chief technology officer at vulnerability disclosure platform Bugcrowd, said the policy shift was becoming increasingly popular for companies with a large, user-centric, third-party attack surface, and echoes similar efforts by Atlassian, Google and Microsoft.

Facebook said when it finds a vulnerability, it will give third-party developers 21 days to respond and 90 days to fix the issues, a widely accepted time frame for reporting and remediating security issues. The company says it will make a reasonable effort to find the right contact for reporting a vulnerability, "including, but not limited to, emailing security reporting emails, filing bugs without confidential details in bug trackers or filing support tickets." But the company said it reserves the right to disclose sooner if the vulnerability is being actively exploited by hackers, or to delay disclosure if it's agreed that more time is needed to fix an issue.

Facebook said it will generally not sign a non-disclosure agreement (NDA) specific to the security issues it reports.

Katie Moussouris, founder of Luta Security, told TechCrunch that "the devil will be in the details."

"The test will be the first time they have to pull the trigger and drop a zero-day with mitigation guidance on a competitor," she said, referring to unpatched vulnerabilities, so called because vendors have had zero days to fix them.

The new policy is focused specifically on how Facebook handles disclosure of issues in third-party code. If researchers find a security vulnerability on Facebook, or within its family of apps, they will continue to report it through the existing Bug Bounty Program.

As part of the policy change, Facebook said it would also disclose vulnerabilities once they are fixed. In a separate blog post, Facebook, which owns WhatsApp, disclosed six vulnerabilities in the messaging app, since fixed.


12 thoughts on Building An Open Source ThinkPad Battery – Hackaday

If you own a laptop that's got a few years on the clock, you've probably contemplated getting a replacement battery for it. Which means you also know how much legitimate OEM packs cost compared to the shady eBay clones. You can often get two or three of the knock-offs for the price of a single real battery, but they never last as long as the originals. If they even work properly at all.

Which is why [Alexander Parent] decided to take the road less traveled and scratch-build a custom battery for his ThinkPad T420. By reverse engineering how the battery pack communicated with the computer, he reasoned he would be able to come up with open source firmware that worked at least as well as what the third-party packs are running. Which, from the sounds of it, wasn't a very high bar. From a more practical standpoint, it also meant he'd be able to create a higher-capacity battery pack than what was commercially available, should he choose to.

A logic analyzer wired in between one of the third-party batteries and a spare T420 motherboard allowed [Alexander] to capture all the SMBus chatter between the two. From there, he wrote some Arduino code that would mimic a battery as a proof of concept. He was slowed down a bit by an undocumented CRC check, but in the end he was able to come up with fairly mature firmware that even allows you to provide a custom vendor name and model number for your pack.
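The CRC he ran into is quite possibly the SMBus Packet Error Code (PEC), a CRC-8 appended to each transaction. As a rough sketch (assuming the standard SMBus polynomial 0x07 with a zero initial value, which this particular pack may or may not follow, and hypothetical frame bytes), the check works like this:

```python
def smbus_pec(data: bytes) -> int:
    """CRC-8 over the whole SMBus transaction: polynomial x^8 + x^2 + x + 1
    (0x07), initial value 0x00, no reflection, no final XOR."""
    crc = 0x00
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

# A frame is valid when the CRC of (message + received PEC byte) comes out zero.
frame = bytes([0x16, 0x00, 0x09])  # hypothetical address/command/data bytes
pec = smbus_pec(frame)
assert smbus_pec(frame + bytes([pec])) == 0
```

The same bitwise routine, ported to C, is small enough to run per-byte inside an SMBus slave handler on a microcontroller like the ATtiny.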

The code was then shifted over to an ATtiny85, with a voltage divider wired up to one of the pins so it can read the pack voltage. [Alexander] says his firmware still doesn't do a great job of reporting the actual remaining battery capacity, but it's close enough for his purposes. He came up with a simple PCB design to hold the MCU and support components, which he eventually plans on putting inside a 3D-printed case that actually plugs into the back of his T420.
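Scaling a 10-volt-plus pack down into an MCU's ADC range with a divider, then recovering the real voltage in firmware, is a single multiplication. A sketch of the arithmetic (the reference voltage and resistor values here are assumptions for illustration, not taken from the project):

```python
V_REF = 1.1          # assumed ADC reference voltage, volts
ADC_MAX = 1023       # 10-bit ADC full-scale reading
R_TOP = 100_000.0    # hypothetical divider: 100k from pack+ to the ADC pin...
R_BOTTOM = 10_000.0  # ...and 10k from the ADC pin to ground

def pack_voltage(adc_reading: int) -> float:
    """Scales a raw ADC reading back up to the pack voltage ahead of the divider."""
    v_pin = adc_reading / ADC_MAX * V_REF         # voltage actually at the pin
    return v_pin * (R_TOP + R_BOTTOM) / R_BOTTOM  # undo the divider ratio

# With these values the divider maps a 0-12.1 V pack range onto the 0-1.1 V ADC range.
```

The trade-off when picking the resistors is the usual one: a high-impedance divider wastes less battery but makes the ADC reading noisier and slower to settle.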

This project is obviously still at a relatively early stage, but we're very interested to see [Alexander] take it all the way. The ThinkPad has long been the hacker's favorite laptop, and we can think of no machine more worthy of a fully open hardware and software battery pack.
