Mayor Fulop Announces NJ's 1st Arts & Culture Trust Fund to Generate $1M Annually for Arts Education and Programming Utilizing Community Input

Resolution to Reinstate City's Successful Open Space Trust Fund Goes before City Council Tonight to also Set Rate at Quarter of a Penny

JERSEY CITY - Mayor Steven M. Fulop joins the Jersey City Municipal Council to announce that New Jersey's first municipal Arts and Culture Trust Fund will generate $1 million annually in critical long-term funding for Jersey City's burgeoning arts community. Additionally, the City's successful Open Space Trust Fund, after which the Arts Trust Fund is modeled, will also bring in a million dollars in tax revenue every year to expand and enhance green park space citywide, based on residents' input.

The Arts Fund is being implemented following the November 2020 election, in which voters largely supported implementing this sustainable funding source to directly benefit local artists and arts organizations, including youth and community programming, and help them grow and thrive. The City Council will vote tonight to set both tax levy rates at one-quarter of a penny.

"Arts and open space are two key quality-of-life components, especially in urban areas like ours, that have been severely undervalued for far too long. We actively engaged the community, and the voters responded strongly to the need for these responsible revenue streams to strengthen our City's infrastructure. We can now take the necessary steps to do exactly that," said Mayor Fulop.

The Jersey City Open Space Trust Fund was enacted by the Fulop Administration in 2016 but was put on hold last year amid the extreme financial uncertainty surrounding the pandemic. A resolution to reinstate the levy goes before the City Council today.

The Open Space and the Arts Trust Funds received strong support from voters to implement an annual tax not to exceed two cents ($0.02) per one-hundred dollars of assessed property value. Each funding source will bring in approximately $1 million in annual revenue with the implementation of the $.0025 tax levy.

Mayor Fulop spent two years working closely with the Jersey City Arts Council to lobby state legislators to implement the mechanisms that would allow for long-term arts funding. Jersey City was first to take action when the state bill was signed into law by the Governor in December 2019, allowing municipalities to implement an Arts and Culture Trust Fund.

"The return on investment in the arts is invaluable to the entire community, not just to artists. It's a powerful tool with social, educational, and economic impacts that will continue to improve all of Jersey City for decades to come. The Arts Trust will generate four times more than what all of Hudson County receives from the State each year to fund arts and cultural programs. We're extremely encouraged by the Mayor's partnership with us to see this through after years of advocating together for this critical investment in our City," said Macadam Smith, Executive Director of the Jersey City Arts Council.

As part of the administration's commitment to expanding residents' access to quality park space citywide, Mayor Fulop recently announced the largest widespread park improvement initiative in decades, utilizing over $2 million generated by the Jersey City Open Space Trust Fund. The first allocation of the Open Space Trust Fund is currently updating over 20 parks spanning all six wards based on community input, with the historic Reservoir 3 in The Heights being the largest funding recipient.

Access to public park space is proven to improve residents' mental and physical health, property values, environmental impacts, and community engagement, among other significant benefits.

"We created the Open Space Trust Fund Committee to equitably spread significant funding throughout all six wards utilizing community feedback," said Ward B Councilwoman Mira Prinz-Arey. "Now we have the potential to create meaningful, long-term support for our arts community, and to ensure we maximize this opportunity, we are using the Open Space Trust Fund and the Open Space Trust Fund Committee as a template to navigate these uncharted waters with the hopes of encouraging others to follow suit."

RedMonk Ranks Programming Languages Using GitHub and StackOverflow — ADTmag – ADT Magazine

Programming language rankings get regular headlines, and they should, at least from trend trackers like us. Among my favorites is the RedMonk quarterly, published this week. I like the methodology of their system, which extracts data from GitHub and Stack Overflow and combines them for "a ranking that attempts to reflect both code (GitHub) and discussion (Stack Overflow) traction."

In other words, it correlates what the cool kids are talking about with actual language usage "in an effort to extract insights into potential future adoption trends." It's a mix that makes it meaningful.
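
RedMonk describes its methodology in prose rather than code, so the following Python sketch is only a rough illustration of the underlying idea of combining a code-based ranking with a discussion-based ranking; the sample rank numbers are placeholders for the example, not RedMonk's actual data or scoring code.

```python
# Illustrative only: combine a GitHub-based rank list and a Stack Overflow-based
# rank list into one ordering by averaging each language's two ranks.
github_rank = {"JavaScript": 1, "Python": 2, "Java": 3, "PHP": 4, "TypeScript": 5}
stackoverflow_rank = {"Python": 1, "JavaScript": 2, "Java": 3, "TypeScript": 4, "PHP": 5}

def combined_ranking(gh, so):
    """Average the two ranks for every language present in both sources."""
    common = gh.keys() & so.keys()
    scored = {lang: (gh[lang] + so[lang]) / 2 for lang in common}
    # Sort by the averaged rank; lower is better.
    return sorted(scored.items(), key=lambda item: item[1])

for lang, score in combined_ranking(github_rank, stackoverflow_rank):
    print(f"{lang}: combined rank score {score}")
```

The real ranking also looks at how strongly the two rank lists correlate, which is what makes the "code plus discussion" framing meaningful.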

The latest ranking was posted by veteran industry analyst Stephen O'Grady on his RedMonk blog. "GitHub and Stack Overflow are used here, first, because of their size, and second, because of their public exposure of the data necessary for the analysis," he wrote.

O'Grady's post includes thoughtful observations about JavaScript, TypeScript, Ruby, Go, R, Kotlin, Rust, and Dart.

There's quite a lot of change afoot in the programming world, the analysts found, but one constant has been the rise of Python, which has maintained its top ranking ahead of Java. "Java was extremely hot on Python's heels, and was in fact closer to the number one ranking than to PHP behind it, but Python's ability to defend its new high ranking is notable," O'Grady wrote.

Half of the Top 20 languages experienced "a degree of movement," O'Grady added, "which is very unusual. It's difficult to attribute this definitively to any higher level macro trends, but the data is consistent with an industry that picked the pace back up in the last two quarters of the year after the initial chaos of lockdowns and so on gave way to livable if extremely suboptimal new routines."

JavaScript is holding its own in the rankings. "[I]t is worth noting just how robust JavaScript's performance remains," O'Grady observed. "In spite of all of the competition from up and coming languages, all the discussion of fragmentation and even criticisms of JavaScript the language itself, it remains remarkably popular."

JavaScript pull requests are up 453% since the first quarter of 2018, and they were up 96% from the last quarter "on an already massive base of commits."

"Simply put, JavaScript remains, its detractors notwithstanding, a force of nature like no other within the industry," he wrote, "and there are no indications in the data that this is likely to change any time soon."

TypeScript, which is a superset of JavaScript, moved up for the sixth of its latest eight quarterly RedMonk rankings, "and its popularity is evident when one looks around the industry." Ruby is on a gentle long-term downward trajectory, the analysts found. Go is slipping, too. R, a language for statistical computing and graphics, appears to be on a slow upswing. Both Kotlin and Rust showed signs of growing popularity. And Dart, an open source, purely object-oriented, optionally typed, class-based language, has risen since the advent of the Flutter framework.

The RedMonk report surrounds a cool plot of the language rankings with detailed analysis of key trends over the past quarter. As far as I'm concerned, it's a must-read.

Posted by John K. Waters on 03/04/2021 at 9:18 AM

Jupyter has revolutionized data science, and it started with a chance meeting between two students – TechRepublic

Commentary: Jupyter makes it easy for data scientists to collaborate, and the open source project's history reflects this kind of communal effort.

If you want to do data science, you're going to have to become familiar with Jupyter. It's a hugely popular open source project that is best known for Jupyter Notebooks, a web application that allows data scientists to create and share documents that contain live code, equations, visualizations and narrative text. This proves to be a great way to extract data with code and collaborate with other data scientists, and has seen Jupyter boom from roughly 200,000 Notebooks in use in 2015 to millions today.
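
For readers who have not used it, a notebook file is just structured JSON that any program can generate. The sketch below is an illustration of that format rather than anything from Jupyter's history as told here; it assumes the nbformat library is installed, and the cell contents are made up for the example.

```python
# Build a tiny two-cell notebook programmatically: one markdown cell of
# narrative text and one executable code cell, then write it to disk as a
# .ipynb file that Jupyter or JupyterLab can open, run, and share.
import nbformat
from nbformat.v4 import new_notebook, new_markdown_cell, new_code_cell

nb = new_notebook()
nb.cells = [
    new_markdown_cell("# Exploratory analysis\nNarrative text lives beside the code."),
    new_code_cell("import statistics\nprint(statistics.mean([1, 2, 3, 4]))"),
]

with open("example.ipynb", "w", encoding="utf-8") as f:
    nbformat.write(nb, f)
```

Because the document itself carries the code, its output, and the prose around it, sharing a single .ipynb file is enough for a collaborator to reproduce and extend the analysis.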

Jupyter is a big deal, heavily used at companies as varied as Google and Bloomberg, but it didn't start that way. It started with a friendship. Fernando Pérez and Brian Granger met the first day they started graduate school at the University of Colorado Boulder. Years later, in 2004, they discussed the idea of creating a web-based notebook interface for IPython, which Pérez had started in 2001. This became Jupyter, but even then, they had no idea how much of an impact it would have within academia and beyond. All they cared about was "putting it to immediate use with our students in doing computational physics," as Granger noted.

Today Pérez is a professor at the University of California, Berkeley, and Granger is a principal at AWS, but in 2004 Pérez was a postdoctoral student in Applied Math at UC Boulder, and Granger was a new professor in the Physics Department at Santa Clara University. As mentioned, they first met as students in 1996, and both had been busy in the interim. Perhaps most pertinently to the rise of Jupyter, in 2001 Pérez started dabbling in Python and, in what he calls a "thesis procrastination project," he wrote the first IPython over a six-week stretch: a 259-line script now available on GitHub ("Interactive execution with automatic history, tries to mimic Mathematica's prompt system").

SEE: Top 5 programming languages for data scientists to learn (free PDF) (TechRepublic)

It would be tempting to assume this led to Pérez starting Jupyter; it would also be incorrect. The same counterfactual leap could occur if we remember that Granger wrote the code for the actual IPython Notebook server and user interface in 2011. This was important, too, but Jupyter wasn't a brilliant act by any one person. It was a collaborative, truly open source effort that perhaps centered on Pérez and Granger, but also people like Min Ragan-Kelley, one of Granger's undergraduate students in 2005, who went on to lead development of IPython Parallel, which was deeply influential in the IPython kernel architecture used to create the IPython Notebook.

However we organize the varied people who contributed to the origin of Jupyter, it's hard to get away from "that one conversation."

In 2004 Pérez visited Granger in the San Francisco Bay Area. The old friends stayed up late discussing open source and interactive computing, and the idea to build a web-based notebook came into focus as an extension of some parallel computing work Granger had been doing in Python, as well as Pérez's work on IPython. According to Granger, they half-jokingly talked about these ideas having the potential to "take over the world," but at that point their idea of "the world" was somewhat narrowly defined as scientific computing within a mostly academic context.

Years (and a great deal of activity) later, in 2009, Pérez was back in California, this time visiting Granger and his family at their home in San Luis Obispo, where Granger was now a professor. It was spring break, and the two spent March 21-24 collaborating in person to complete the first prototype IPython kernel with tab completion, asynchronous output and support for multiple clients.

By 2014, after a great deal of collaboration between the two and many others, Pérez, Granger and the other IPython developers co-founded Project Jupyter and rebranded the IPython Notebook as the Jupyter Notebook to better reflect the project's expansion outwards from Python to a range of other languages including R and Julia. Pérez and Granger continue to co-direct Jupyter today.

"What we really couldn't have foreseen is that the rest of the world would wake up to the value of data science and machine learning," Granger stressed. It wasn't until 2014 or so, he went on, that they "woke up" and found themselves in the "middle of this new explosion of data science and machine learning." They just wanted something they could use with their students. They got that, but in the process they also helped to foster a revolution in data science.

How? Or, rather, why has Jupyter helped to unleash so much progress in data science? Rick Lamers explained:

Jupyter Notebooks are great for hiding complexity by allowing you to interactively run high level code in a contextual environment, centered around the specific task you are trying to solve in the notebook. By ever increasing levels of abstraction data scientists become more productive, being able to do more in less time. When the cost of trying something is reduced to almost zero, you automatically become more experimental, leading to better results that are difficult to achieve otherwise.

Data science is...science; therefore, anything that helps data scientists to iterate and explore more, be it elastic infrastructure or Jupyter Notebooks, can foster progress. Through Jupyter, that progress is happening across the industry in areas like data cleaning and transformation, numerical simulation, exploratory data analysis, data visualization, statistical modeling, machine learning and deep learning. It's amazing how much has come from a chance encounter in a doctoral program back in 1996.

Disclosure: I work for AWS, but the views expressed herein are mine.

The Shed Plans to Reopen for Covid-Tested Audiences – The New York Times

The New York City arts scene is about to pass another milestone on the road to reopening: The Shed, a large performing arts venue in Hudson Yards, said Wednesday that it would hold a series of indoor performances next month for limited audiences in which everyone has either been tested for the coronavirus or vaccinated against it.

The Shed said it would present four events next month: concerts by the cellist and vocalist Kelsey Lu, the soprano Renée Fleming and a string ensemble from the New York Philharmonic, and a comedy set by Michelle Wolf.

Each of the performances will be open to up to 150 people, all masked, in a space that can seat 1,280. The Shed said it would require patrons to present confirmation of a recent negative coronavirus test, or confirmation of full vaccination; requiring testing allows the Shed to admit the largest number of audience members allowed under state protocols.

"In these first steps, there's limited capacity, but you have to start somewhere," said the Shed's artistic director, Alex Poots. "Those first steps are really important for us, for our audiences and for our artists, just the idea that we might return to something joyful."

The Shed is the third New York City arts presenter to announce this week specific plans for a resumption of programming, following last week's announcement by Gov. Andrew M. Cuomo that arts and entertainment organizations could begin presenting indoor work for limited-capacity audiences. On Tuesday, the commercial producer Daryl Roth said she would present "Blindness," an audio adaptation of the José Saramago novel, to audiences of up to 50 at her Union Square theater, and the Park Avenue Armory said it would present a series of music, dance, and movement works, starting with a piece by Bill T. Jones for an audience of 100. The Armory said it would require ticket buyers to take an on-site rapid coronavirus test, for free, before entering.

Poots said the Shed would start with music and comedy because both have universal appeal, and they also align well with the guidelines that have emerged.

"It gets far more complex when you get into more intricate art forms that require a lot of costume changes or close-up rehearsal," he said. The productions are small, but not tiny; Lu will be accompanied by 14 musicians, and the Philharmonic ensemble will include 20 players. None of the performances will have intermissions.

The first performer, Lu, is planning to present an opera called "This is a Test."

"I have been waiting for this day; it's been too long," Lu said. "There's nothing like that exchange between audience and performer. It's left a void for me and so many of us."

The Shed, like many arts institutions, canceled programs starting March 12 of last year. Since then, it has presented a visual art exhibition of work by Howardena Pindell, a filmed rendition of a play ("November" by Claudia Rankine), and an online digital works series. But these April events will be the first live performances with paying audiences. The Shed has some considerable architectural advantages under the circumstances: it is a new building with a state-of-the-art HVAC system capable of fully refreshing the breathable indoor air every 30 minutes, and its 18,000-square-foot main performance space opens directly to the outside.

The Shed is planning to follow the April performances by, in May, hosting the Frieze New York art fair for the first time, and in June, hosting Open Call, a program for early career artists, as well as programs in collaboration with the Tribeca Film Festival. Poots said that he hopes that by fall, things will be getting a lot easier, in terms of capacities and regulations.

Best Platform-as-a-Service Tools 2021 – IT Business Edge

Software development is no easy task, and platform maintenance, resource planning, and buying the right software licenses can further complicate it. Platform-as-a-service (PaaS) solutions remove some of these complexities, allowing developers to focus on what they do best. With PaaS, developers only have to worry about managing the applications or software they develop, and the PaaS provider handles everything else, including any platform maintenance, development tools, and database management.

Imagine how much more productive your developers can be when they don't have to worry about maintaining a development platform. To find the best PaaS tool for your business, you need to have an idea of what you're going to use it for and how much experience your developers have. For novice devs creating simple apps, consider low-code applications. If you're confident in the skills of your team, you should look for pay-as-you-go platforms that only charge while your code is running. And if you offer custom-built websites, choose a platform that lets you build, design, and host them all in one place. Once you've got your shortlist narrowed down, take advantage of free plans and trials when you can to find the platform that fits your needs.

To make it easier for developers to find the right PaaS service, we've created this guide comparing the top platform-as-a-service tools of 2021.

Key takeaway: Google App Engine is a solid choice for app developers who use major programming languages and don't want to handle their own maintenance.

Google App Engine offers a fully managed platform that's perfect for building both web and mobile applications. It supports the most popular coding languages, including Python, Java, C#, and PHP. Within App Engine, you get solid logging and monitoring tools to help you diagnose the health of your app, allowing you to identify and fix bugs quickly. The service runs on a pay-as-you-go model, so you only pay for the resources you use. Additionally, App Engine only consumes resources when your code is running.
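
As a rough illustration of how little platform code a fully managed service asks of developers, here is a minimal sketch of a Python web app of the kind App Engine's standard environment can run. It assumes a Flask dependency declared in requirements.txt and an accompanying app.yaml; those are conventions of the platform rather than details taken from this guide.

```python
# main.py: a minimal web handler. App Engine routes HTTP requests to the app,
# while scaling and platform maintenance are handled by the service.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a managed platform"

if __name__ == "__main__":
    # Local development only; in production App Engine runs the app for you.
    app.run(host="127.0.0.1", port=8080, debug=True)
```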

Key takeaway: Plesk is best for web developers and designers who use custom code on their sites and need a platform that offers both development and hosting capabilities.

Along with application development, Plesk also provides a platform to create and host custom websites. The ready-to-code environment supports PHP, Java, Ruby, and most other major programming languages. Plesk is also available in 32 different languages. The self-repair tools allow you to handle technical issues without contacting support, and the Plesk mobile app lets you manage sites and servers on the go. Pricing is done on a monthly basis, and there are several different tiers available to fit your needs.

Key takeaway: AWS Elastic Beanstalk is best for applications that have already been coded and just need to be deployed or scaled.

AWS Elastic Beanstalk helps developers deploy and scale applications they've already created. Developers simply need to upload the code into the platform, and Elastic Beanstalk automatically deploys it, including monitoring the application's health and load balancing. It supports popular coding languages like Java, Ruby, Go, and Docker and familiar servers, including Apache, Passenger, and Nginx. As a service, Elastic Beanstalk is free to use; developers only pay for the AWS resources they use to store and run their applications.

Key takeaway: Platform.sh is a strong contender for developers who need a platform that supports both application development and web design.

Platform.sh allows you to develop, deploy, manage, and secure applications and custom websites from a single platform. The tool supports a large number of coding languages and frameworks, including Ruby, Drupal, WordPress, and Python. The Source Operations feature enables your code to update itself to cut down on your maintenance time, although you do need to upgrade from the basic package to get this option. There are three pricing tiers for you to choose from, and you can add more storage to each plan as needed.

Key takeaway: Azure Web Apps provides a solid, pay-as-you-go option for developers looking to build Windows or Linux-based applications.

Azure Web Apps offers a platform with continuous deployment and support for both Windows and Linux. The tool offers source code integration from GitHub, one-click publishing from Microsoft Visual Studio, and live debugging to improve the productivity of your development team. Azure Web Apps also provides an end-to-end view of your application's health, allowing you to make calculated decisions on how to best improve your apps. There are six pricing tiers to choose from, and costs are billed hourly based on the resources you use.

Key takeaway: IBM Cloud Foundry provides an open-source platform that gives developers a community of support and extra resources to improve their applications.

IBM Cloud Foundry is an open-source PaaS tool that prioritizes the speed and ease of development. Third-party services like APIs or community build packs are available through a marketplace to improve functionality and give developers a community of support. Cloud Foundry allows you to customize your development experience thanks to several different hosting models. Additionally, the platform is fault tolerant: it automatically replicates if an instance fails or duplicates if it needs more performance. There is a free tier available, or you can pay for resources as you use them; there are no upfront costs.

Also read: Changing Application Landscape Raises New Cybersecurity Challenges

Key takeaway: Zoho Creator is a great option for developers with little coding experience thanks to the low-code nature of the platform.

Zoho Creator is a low-code app development platform allowing you to build both simple and complex applications. The tool offers pre-built templates, visual builders, and code editors to simplify the development process and add automations, improving workflow management. Because the platform is low-code, it's designed to be used by anyone, not just highly skilled developers. There are three pricing tiers to choose from, and you can take advantage of a 15-day free trial.

Also read: No Code and Low-Code Coupled with SaaS Platforms Rise to the COVID-19 Challenge

Key takeaway: Dokku is a free PaaS platform best for developers looking to build applications on a budget.

Dokku is a PaaS tool powered by Docker that can be installed on any hardware or cloud provider. You can write plugins in any language and share them online with others, or you can take plugins that others have made and extend them to fit your needs. The platform is free; all you have to do is install the code on your hardware or cloud environment, and you can be up and running in just a few minutes. Once it's live, you can use Git to add Heroku-compatible applications.

Key takeaway: Salesforce Platform is designed for companies already using Salesforce that want to build applications to improve its functionality.

Salesforce Platform allows you to tailor Salesforce to meet all of your company's needs. You can add artificial intelligence (AI) to your apps and code in the language you're most comfortable with, using Heroku. Not only can you build apps to improve Salesforce's functionality, but you can also customize the user interface to better fit your company's needs. Salesforce Platform is an add-on for the CRM software, so you will need a plan to get access.

Also read: Salesforce Extends Scope of Customer Experience Management Effort

Not all of the products on this list are going to be right for every company. You need to determine your priorities and ensure your developers have the necessary expertise to use the platform you choose.

If you're designing add-ons for software you currently use, like Salesforce, check their offerings to see if they provide any kind of PaaS before you invest in your own. Not only will this save you money, but you'll know the application you create will be able to fit into the existing software.

The nice thing about many PaaS solutions is that you pay as you go, so you can try out a few different options before deciding on the right one for you.

Read next: Best Practices for Application Security

We produce happy and angry expressions more rapidly than sad expressions – Tech Explorist

Perceiving and deciphering facial expressions is a vital part of social interaction. While we comprehend the spatial qualities of an expression (how the mouth moves in a smile, for example), the speeds at which expressions are produced are often overlooked.

The ability to pick up on and quickly decipher these signals could also help people read facial expressions even when mask-wearing limits other visual cues.

A recent study, conducted at the University of Birmingham, quantified the speed of changes in distance between key facial features. The researchers found that people tend to produce happy and angry expressions more rapidly, while sad expressions are made more slowly.

Lead author Dr. Sophie Sowden said, "Better understanding how people interpret this important visual cue could give us new insights into the diagnosis of conditions such as Autism Spectrum Disorder or Parkinson's Disease. This is because patients with these conditions often recognize facial expressions differently, or exhibit expressions differently."

In the experiment, people were asked to generate facial expressions directed at a camera. The researchers used an open-source program called OpenFace to track facial movement. They estimated the speed of movement in regions of the face known to be significant in producing expression, including around the eyebrows, the nose, and the mouth, as well as across the face in general.
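
A tracker like OpenFace only outputs per-frame landmark positions, so a speed measure has to be computed from them. The sketch below is a hedged illustration of that step, not the study's actual analysis code; the CSV column names (timestamp, x_0..x_67, y_0..y_67) and the choice of mouth landmarks are assumptions about the tracker's typical export format.

```python
# Estimate how fast facial landmarks move, frame to frame, from a tracker's CSV.
import numpy as np
import pandas as pd

def mean_landmark_speed(csv_path: str, landmark_ids: range = range(48, 68)) -> float:
    """Average pixel speed of the chosen landmarks (default: mouth region)."""
    df = pd.read_csv(csv_path)
    df.columns = [c.strip() for c in df.columns]  # tolerate padded column names
    t = df["timestamp"].to_numpy()
    xs = df[[f"x_{i}" for i in landmark_ids]].to_numpy()
    ys = df[[f"y_{i}" for i in landmark_ids]].to_numpy()
    # Frame-to-frame displacement of each landmark, divided by elapsed time.
    disp = np.hypot(np.diff(xs, axis=0), np.diff(ys, axis=0))
    dt = np.diff(t)[:, None]
    return float((disp / dt).mean())

# Example: speed = mean_landmark_speed("happy_expression.csv")
```

Comparing this kind of average speed across happy, angry and sad recordings is the sort of measurement the study describes.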

During the first part of the experiment, the scientists studied the average speed at which participants produced different expressions. In this part, participants were asked to produce posed expressions as well as expressions during speech, and spontaneous expressions were recorded in response to emotion-inducing videos. Interestingly, the differences in speed across emotions depended on the region of the face and the type of expression being considered.

In a second phase of the study, the team investigated what would happen if they created schematic versions of the facial expressions produced and manipulated the speeds involved. In this experiment, the scientists found that people got better at recognizing an expression as happy or angry as it was sped up. In contrast, if it was slowed down, people would more accurately identify it as sad.

Scientists believe that this study could pave the way towards diagnosing autism and Parkinson's disease. It could also be useful in a range of artificial intelligence applications such as facial recognition software.

Containers: Learning from the pioneers – ComputerWeekly.com

Today's modernity is tomorrow's legacy. Very few established businesses are blessed with homogeneity when it comes to the technologies and suppliers that support their IT operations. Applications are developed according to prevailing programming and deployment models. Virtualised servers allow enterprises to run established core applications on modern hardware and so avoid the potentially significant costs, risks and disruption of rebuilding.

Organisations regularly talk of transforming their operations to support new ways of engaging. Within this, the demand for modernised applications features prominently as a desired goal.

The Covid-19 pandemic has further sharpened a focus on what many see as the core of a modern application. It needs to be resilient, consistent and secure, architected as a lightweight modular programming model for rapid deployment and scalability.

Containers, with Kubernetes, the open source container orchestration and management platform, offer a modern, lightweight application model for quick deployment of operations based on modular, transient and immutable services. They are becoming more popular as they meet the demand for applications that can scale as necessary, whether on-premise in a managed datacentre or deployed to a public or private cloud.

Importantly, containers offer consistency and resilience, and form part of the technologies built for cloud-native delivery, multicloud and broader hybrid IT operations.

However, there is a tendency for every narrative about modern applications to be framed in the context of container technology. The reality is that containers have their place in delivering optimal capabilities but only for the right application.

To gain some insight into where containers play best, CCS Insight, commissioned by Red Hat, conducted a research study in January and February 2021. The goal was to understand the development, deployment and use of container applications and services. One of the top uses for container deployments was to simplify the integration and consistency of internal systems and components.

In fact, many of the top usage scenarios were as expected, such as providing autoscaling services for existing solutions and enabling the sharing and reuse of resources across an organisation. And although containers were being used for e-commerce services as you might expect, given their scaling needs in the wider market, containers are not always the chosen technology for modern app builders.

Undoubtedly, containers and Kubernetes offer many operational benefits that place them at the heart of modern application development strategies. Their ability to provide a consistent and immutable scaling model, regardless of the technology stack, highlights the productivity benefits on offer and the scope for some level of portability.

Adoption of the technology is growing, with the rise in cloud-native and cloud-first strategies as the primary focus for new application development and deployments. In another CCS Insight survey in mid-2020 that questioned IT leaders about their investment plans, 42% of 736 respondents had opted for a cloud-native or cloud-first approach. However, the same survey also revealed that only 10% had made a container-first model their top priority.

The reality is that containers, and in particular the Kubernetes container orchestration platform, have proved to be a challenging technology to navigate, implement and administer. There are many facets to containers and their management that must be addressed.

CCS Insight's survey for Red Hat reflects many of the challenges that face the implementation of any new technology, such as a lack of skills and training, and not knowing where best to implement.

CCS Insight's study differs from other similar public surveys because the respondent profile featured a more experienced set of technical skills operating with progressive processes and IT systems. Respondents' maturity in DevOps and cloud development and deployment was notably high, as was their mix of deployment platforms.

Those embarking on a container strategy should take note of this maturity and the way pioneers have invested in education and training, allowing them to draw on a broad range of skills and technologies.

The immutable nature of container-based services, which can be deleted and redeployed when a new update is available, highlights the flexibility and scale they present. But while containers may come and go, there will be critical data that must remain accessible and with relevant controls applied.

For the growing number of developers embracing the container model, physical computer storage facilities can no longer be someone else's concern, for example. Developers will need to become involved in provisioning storage assets with containers. Being adept with modern data storage, as well as the physical storage layer, is vital to data-driven organisations.
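
To make that concrete, here is a minimal sketch of a developer provisioning persistent storage alongside container workloads. It assumes the official kubernetes Python client and a configured kubeconfig; the claim name and size are placeholders for illustration, not anything prescribed by the article.

```python
# Create a PersistentVolumeClaim so data outlives the containers that use it.
from kubernetes import client, config

def create_claim(namespace: str = "default", size: str = "1Gi") -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    claim = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            resources=client.V1ResourceRequirements(requests={"storage": size}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace=namespace, body=claim
    )
```

Pods then mount the claim as a volume, so the containers themselves stay immutable and disposable while the data persists.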

Bola Rotibi is a research director at CCS Insight

Young people are hungry for good sex education. I found a program in Mexico that gets it right – The Conversation AU

More than 30,000 people have signed a petition, launched by ex-Sydney schoolgirl Chanel Contos, demanding that consent be at the forefront of sexual education in schools. The text in the petition states:

Those who have signed this petition have done so because they are sad and angry that they did not receive an adequate education regarding what amounts to sexual assault and what to do when it happens.

The petition encouraged a growing number of harrowing testimonies from young women throughout Australia about their experiences of sexual assault at parties.

School principals, particularly in all-boys schools, have responded by acknowledging the need for a cultural shift. Some schools have gathered students for sessions about consent, others addressed the topic in the classroom, some have asked parents to engage their children in discussions about sexual consent and social norms.

But studies show one-off conversations or education sessions about consent and rape are unlikely to influence long-term change. Interventions need to systematically and gradually address the harmful social norms that underpin a host of interrelated issues including rape culture, intimate partner violence and homophobic bullying.

I evaluated a sexuality education program in Mexico City. My evaluation highlighted a number of factors that can help shift harmful beliefs and behaviours related to gender, sexuality and relationships.

Evidence from around the globe suggests that to transform the harmful gender norms that contribute to violence and sexual assault, programs should promote critical reflections about gender, relationships and sexuality. Evidence also shows such reflection takes time.

Read more: Let's make it mandatory to teach respectful relationships in every Australian school

A community-based organisation providing sexual and reproductive health services throughout Mexico adapted their sexuality course in 2016. It was a 20-hour course, delivered weekly over one semester to 185 students in one school. Each group of 20 participants aged 14 to 17 had one facilitator.

The facilitators in the course were young people (under 30 years of age). They were trained as professional health educators, and to facilitate activities that promote critical reflection among students about entrenched beliefs and social norms.

Such conversations can be about things like the nature of love and behaviours that are good and bad in a relationship.

In the program, students engaged in debates about romantic jealousy, and whether it was a sign of love. One student told me:

they told us [...] about what is love and what is not love. I told my boyfriend, they told us that jealousy is bad, and he replied, that's right, because it means a lack of trust, and in this way, we sometimes talked about the course.

Vignettes that were relevant to the students' lived experiences stimulated debates about gender roles and social norms. For example, one student said:

One of the things my classmate said stayed with me. He said that the man has to work and the woman should stay in the house. It made me, like, think. I think that a woman doesn't need to always be at home [...] as if it were a prison. I think you need to give freedom to both people in a relationship.

These group conversations can be challenging. They may also be upsetting to participants, and could even provoke verbal harassment or violence.

One facilitator described bullying and violence during some sessions of the course.

The group started to verbally attack each other, and it was one corner of the room against the other.

This means facilitators need training not only on the concepts of gender, sexuality and relationships, but also on how best to directly address comments that may reinforce harmful gender norms or other types of violence in the classroom and use those as teaching moments to highlight the consequences of harmful social norms.

I saw the students become more comfortable talking about relationships and sexuality as the course progressed. One young man said:

before the course, it made us a bit embarrassed to talk about sexual and reproductive health. But afterwards we understood, with the course, that it was, like, very natural to talk about it. It's like any other thing, and so I now feel fine talking about it.

As a result of the program, some students said they directly addressed negative behaviours in their own relationships. And some even left controlling relationships.

One student said:

You know the information they told us about relationships? I was thinking about that, and then I decided to talk to my girlfriend about her controlling behaviour.

The students also developed trust in the course facilitators over time. One young man said:

As time passed, they gave me confidence that if at any moment I need something I can ask them for help, it wont be a problem.

The facilitators made referrals to health care, provided advice and support, and in one case accompanied a participant to obtain care.

In Australia, the quality and extent of implementation of sexual education is often left up to individual teachers or schools. But many teachers called on to deliver sexuality education feel unprepared to go beyond factual biological instruction.

A government mandate, as seen in a handful of countries such as the United Kingdom, Germany and the Netherlands, is needed to ensure high quality sexuality education is delivered to all young people in Australia.

Read more: Relationships and sex education is now mandatory in English schools Australia should do the same

But even when mandated, implementation at a national scale is challenging. To effectively deliver such programs, resources should be put towards developing a large cohort of health educators who are trained and supported to deliver quality sexual education.

A nation-wide program could be implemented through a partnership between national and state governments and community-based organisations already experienced with sexuality education.

As shown in the quotes above, the young people in the Mexico City course discussed topics from their sexuality course with peers, partners and parents.

This suggests that, even if parents feel unprepared to educate their children about sexual health, sexuality education can provide a bridge to open and reflective conversations. These can be a two-way exchange so parents need not serve as the educator, and can themselves benefit along with their children.

Read more: Not as simple as 'no means no': what young people need to know about consent

My research on prevention programming, as well as reviews of school-based interventions more broadly, reinforces the centrality of schools, both as settings in which violence is perpetrated, and as a site for its prevention.

Schools are often heteronormative institutions and can perpetuate toxic masculinity and rape culture. Investing in good quality sexual education can prevent the downstream effects we are seeing now in the testimonials about sexual assault in schools and in the national parliament.

To infinity and beyond: Linux and open-source goes to Mars – ZDNet

Perseverance hit Mars' atmosphere at almost 12,000 miles per hour (19,312 kilometers per hour), and a mere seven minutes later NASA landed its latest Mars rover softly and safely. Onboard the one-ton mobile science lab is its tiny flying companion, the drone helicopter Ingenuity. If all goes well, the four-pound (1.8 kilograms) Ingenuity will be the first vehicle to ever fly on another world. At 11 light-minutes from Earth, no one will fly the dual-propped Ingenuity with a drone controller. Instead, it will fly itself using a combination of Linux and a NASA-built program based on the Jet Propulsion Laboratory's (JPL) open-source F′ (pronounced F prime) framework.

This will be no easy task. No one has ever tried to fly on Mars, which has an atmosphere only one-hundredth of the density of Earth's air. True, Mars also has only a third of Earth's gravity, but still, Ingenuity's engineers will be pleased as punch just to get Ingenuity off the ground.

Indeed, Ingenuity is purely a technology demonstration. It's not designed to support the Perseverance mission, which is searching for signs of ancient life and collecting rock and dirt samples for later missions to return to Earth. Its mission is to show that it's possible to fly on Mars using commercial off-the-shelf (COTS) hardware and open-source software.

In an IEEE Spectrum interview, Timothy Canham, a JPL Embedded Flight Software Engineer, explained the helicopter's processor board is powered by a Qualcomm Snapdragon 801 running at 500 Hz (hertz, not megahertz). While that may sound painfully slow and old, it's far faster than Perseverance's processors. That's because NASA-grade CPUs and chips must meet NASA's High-Performance Spaceflight Computing (HPSC) radiation standards. These customized processors take years of design work and testing before they're certified for spaceflight. For instance, NASA's newest general-purpose processor is an ARM A53 variant you may know from the Raspberry Pi 3. Ingenuity, however, as a demo project can use a more ordinary, and thus a more modern, CPU.

In fact, Canham explained, "we literally ordered parts from SparkFun [Electronics]. This is commercial hardware, but we'll test it, and if it works well, we'll use it."

As for the software, Canham said,

This is the first time we'll be flying Linux on Mars. We're actually running on a Linux operating system. The software framework that we're using is one that we developed at JPL for CubeSats and instruments, and we open-sourced it a few years ago. So, you can get the software framework that's flying on the Mars helicopter, and use it on your own project. It's kind of an open-source victory because we're flying an open-source operating system and an open-source flight software framework and flying commercial parts that you can buy off the shelf if you wanted to do this yourself someday.

That open-source software is F′. It's a component-driven framework that enables rapid development and deployment of spaceflight and other embedded software applications. F′ has been successfully deployed on several space applications many times before. It is tailored but not limited to small-scale spaceflight systems such as CubeSats, SmallSats, and, now, a self-flying helicopter.

It includes:

There are, of course, many other open-source NASA programs. There are more than 500 NASA Open Source 3.0 license software projects. Long before the concepts of free software and open-source had been articulated, NASA shared much of its code freely under the COSMIC program.

NASA has long used Linux on the International Space Station (ISS). Linux's path to supercomputer domination started at NASA's Goddard Space Flight Center (GSFC) with the first Beowulf supercomputer.

Like Ingenuity, the first Beowulf cluster was built with COTS equipment. It was built using 16 Intel 486DX processors and 10Mbps Ethernet for the bus, for only a few thousand dollars. While its speed was only in single-digit gigaflops, Beowulf showed you could build supercomputers on a shoestring budget with Linux. Now, Ingenuity is showing that great things can still come from inexpensive hardware paired with Linux and open-source software.

DARPA starts a 5G open-source stack project with the Linux Foundation – RCR Wireless News

The Defense Advanced Research Projects Agency has begun a broad collaboration with the Linux Foundation, hoping to spur open-source development of technologies for use by the U.S. government that include secure 5G network software and applications.

The US GOV OPS (Open Programmable, Secure) umbrella organization's first project, OPS-5G, will focus on a software stack for 5G, the network edge and IoT. According to a newly established website about the project, OPS-5G will define and test an end-to-end 5G stack and include elements from multiple Linux Foundation projects, including LF Networking, LF Edge, Zephyr Project and Cloud Native Computing Foundation, along with other top-tier projects that call the Linux Foundation home.

The project formation encourages ecosystem players to support U.S. government initiatives to create the latest in technology software, according to DARPA and the Linux Foundation. According to the two organizations, OPS-5G's goal is to create open-source software and systems enabling secure, end-to-end 5G and follow-on mobile networks, to address feature velocity in open-source software, and to mitigate security concerns such as large-scale botnets that leverage IoT devices, network slicing on suspect gear and adaptive adversaries operating at scale.

Mike Woster, head of ecosystems at the Linux Foundation, said that the Linux Foundation's breadth of projects means that, between existing open-source projects and new ones that may be initiated under the US GOV OPS umbrella, it will be possible to stitch together a full end-to-end 5G reference architecture. The umbrella project also gives DARPA a place to push the results of its research and development into open-source collaborations. The overall goal is to accelerate 5G software development, ranging from specific applications to network feature support, orchestration and analytics, by borrowing from and building upon existing open-source projects and new ones.

The US GOV OPS project will launch as a standard open source project, with a charter similar to other projects within the Linux Foundation, which already is home to a number of projects related to Open RAN, edge computing, Kubernetes and others that will enable US GOV OPS to build on a secure code base for use by the U.S. government, according to a release.

"But it's more than just code," said Woster. "Open source development and open development is really around having a neutral governance framework; open, transparent development processes; that it's secure, that the intellectual property is properly managed and that the velocity for developers, that all of that matches the needs of the developers."

"DARPA's use of open source software in the Open Programmable Secure 5G (OPS-5G) program leverages transparency, portability and open access inherent in this distribution model," said Dr. Jonathan Smith, program manager for DARPA's Information Innovation Office, in a statement. "Transparency enables advanced software tools and systems to be applied to the code base, while portability and open access will result in decoupling hardware and software ecosystems, enabling innovations by more entities across more technology areas."

The Linux Foundation 5G project is just one of the ways that the U.S. Department of Defense is supporting or exploring the use of 5G. Carriers are deploying 5G at military bases to test various use cases, and earlier this week, Federated Wireless announced that it is leading a project to use 5G in CBRS spectrum to modernize operations at a Marine Corps warehouse in Albany, Georgia. DARPA has also supported research into ad hoc spectrum sharing with its three-year Spectrum Collaboration Challenge. Some of the work from SC2 has informed DARPA's continued support of research into the possibility of more granular CBRS sharing.
