Sarasota mom kicked out of school board meeting says her first amendment rights were violated – WTSP.com

According to Melissa Bakondy, she was ordered and escorted out of the April 19 meeting by school police for allegedly wanting to verbally attack a board member.

SARASOTA, Fla. A Sarasota mom has accused the county's school board of trampling on her First Amendment rights. The incident happened during the public comment section of a meeting last week.

According to Bakondy, she was ordered out of the April 19 meeting and escorted away by school police for allegedly intending to verbally attack a board member. She said she was cut off during her preamble, before she could make the point she had come to the meeting to deliver.

She said School Board Chair Jane Goodwin interrupted her, cutting off her microphone to stop her from speaking, and she was subsequently removed from the meeting.

"I was actually going to say what Shirley Brown is quoted on a video on a hot microphone from the school board workshop last week," Bakondy said. "It was a sexual comment, inappropriate joke, and as a public school official, making policy for our children, I don't think it is appropriate for these individuals to be able to talk like that and represent our school district."

Bakondy added, "Any dissenting information is cut off as threatening, abusive, a personal attack, and right then and there [Goodwin] cuts off the mic. She doesn't want to hear about it."

Bakondy, who has frequently attended school board meetings, said Goodwin's practice of cutting off microphones during public comments amounted to censorship of parents. During the exchange, Goodwin asked Bakondy if she had children in the district, a question that drew an objection from another board member, Bridget Zeigler.

"That is not appropriate. You don't get to ask people who come to a public meeting whether they have children or not. Period. You are way out of line," Zeigler said.

After Bakondy was escorted from the podium, police asked her to leave the meeting room. When the board members returned from a five-minute recess, one of them briefly addressed the incident and why Bakondy was removed.

"A comment that a board member made on a hot mic is not associated with Agenda 35, so you know, who's right or who's wrong or what's the point of order. I would think that we have to do a better job of making sure that our public speakers are sticking close to the comments," said Sarasota School Board member Tom Edwards.

According to a statement from the school district, Goodwin's action "falls within the scope of School Board policy."

"As a district, we will follow policy as written by the School Board of Sarasota County," wrote Kelsey Whealy, communications and community relations specialist and spokesperson for the school district.

The district's policy document on School Board Governance and Organization, Chapter 2.22, Section 6 A, which covers the public comment section, states that "All statements must be directed to the Chair."

The policy also states that the chair may interrupt, warn or terminate a person's statement if it is lengthy, abusive, threatening, defamatory, obscene, or irrelevant to the business of the meeting.

In addition, items that are not on the agenda are allotted discussion time at the back end of the public comments session. Bakondy was kicked out after she made a direct reference to another board member, Shirley Brown, on a subject matter that was not specifically on the agenda.

After the law enforcement officers involved in the incident were misidentified as deputies, Sarasota Sheriff Kurt Hoffman weighed in on what happened.

"I do not condone tax-paying citizens being silenced. Your sheriff's office was not involved in this very unfortunate incident," Hoffman said.

Bakondy said she was not actively threatening anyone and maintained that her First Amendment rights were directly violated by the school board's action.

She added that the board has also placed several restrictions on the meetings and changed practices, such as changing the camera angle, cutting individual comment time from three minutes to two, and limiting public comment to one hour from two.

She added that parents have the right to petition and hold public officials accountable and Goodwin was being heavy-handed and quick to hit the off button.

"I feel like I was censored and cut off before I could even say anything. She said that my mic was cut off because of what I was about to say. So if you are cutting the mic off because of what you think I am about to say, you have violated my First Amendment rights," she said.

"They are elected officials, we don't have to be nice to them. Not that I want to be mean, but sometimes it takes a little political theater to get attention to the issues," Bakondy said.


Can Open Source Licensing Be Applied to Data? – Above the Law

Open source software has permeated the technology world in nearly every industry over the past decade. In fact, a majority of organizations today are using open source software in at least some capacity.

Open source software is software that users can access in source code form, without limitations on the scope of its use or whether it can be modified or redistributed in the future. There are numerous open source software licenses, each with their own compliance requirements.

The success of the open source model as applied to software has led many to ask whether the same ideas could be applied to data, especially as business across all sectors has become increasingly data-driven.

To date, however, attempts to apply the open source paradigm to the realm of data have been both problematic and unsuccessful. To learn why, register for PLI's upcoming one-hour program, Beyond Open Data: The Only Good License is No License.

The session will cover a number of the most pressing topics relevant to the open source data question.

Register today to learn more about one of the biggest evolving issues in tech today.


The problems with Elon Musk's plan to open-source the Twitter algorithm – MIT Technology Review

For example, Melanie Dawes, chief executive of Ofcom, which regulates social media in the UK, has said that social media platforms will have to explain how their code works. And the European Union's recently passed Digital Services Act, agreed on April 23, will likewise compel platforms to offer more transparency. In the US, Democratic senators introduced proposals for an Algorithmic Accountability Act in February 2022. Their goal is to bring new transparency and oversight of the algorithms that govern our timelines and news feeds, and much else besides.

Allowing Twitter's algorithm to be visible to others, and adaptable by competitors, theoretically means someone could just copy Twitter's source code and release a rebranded version. Large parts of the internet run on open-source software, most famously OpenSSL, a security toolkit used by much of the web, which in 2014 suffered a major security breach.

There are even examples of open-source social networks already. Mastodon, a microblogging platform that was set up after concerns about the dominant position of Twitter, allows users to inspect its code, which is posted on the software repository GitHub.

But seeing the code behind an algorithm doesn't necessarily tell you how it works, and it certainly doesn't give the average person much insight into the business structures and processes that go into its creation.

"It's a bit like trying to understand ancient creatures with genetic material alone," says Jonathan Gray, a senior lecturer in critical infrastructure studies at King's College London. "It tells us more than nothing, but it would be a stretch to say we know about how they live."

There's also not one single algorithm that controls Twitter. "Some of them will determine what people see on their timelines in terms of trends, or content, or suggested follows," says Catherine Flick, who researches computing and social responsibility at De Montfort University in the UK. The algorithms people will primarily be interested in are the ones controlling what content appears in users' timelines, but even that won't be hugely useful without the training data.

"Most of the time when people talk about algorithmic accountability these days, we recognize that the algorithms themselves aren't necessarily what we want to see. What we really want is information about how they were developed," says Jennifer Cobbe, a postdoctoral research associate at the University of Cambridge. That's in large part because of concerns that AI algorithms can perpetuate the human biases in data used to train them. Who develops algorithms, and what data they use, can make a meaningful difference to the results they spit out.

For Cobbe, the risks outweigh the potential benefits. The computer code doesn't give us any insight into how algorithms were trained or tested, what factors or considerations went into them, or what sorts of things were prioritized in the process, so open-sourcing it may not make a meaningful difference to transparency at Twitter. Meanwhile, it could introduce some significant security risks.

Companies often publish impact assessments that probe and test their data protection systems to highlight weaknesses and flaws. When they're discovered, they get fixed, but data is often redacted to prevent security risks. Open-sourcing Twitter's algorithms would make the entire code base of the website accessible to all, potentially allowing bad actors to pore over the software and find vulnerabilities to exploit.

"I don't believe for a moment that Elon Musk is looking at open-sourcing all the infrastructure and security side of Twitter," says Eerke Boiten, a professor of cybersecurity at De Montfort University.


SonarSource raises $412M to scan codebases for bugs and vulnerabilities – TechCrunch

Maintaining source code is one of the toughest challenges that software developers face. In a 2020 survey from Sourcegraph, 51% of developers said that they have more than 100 times the volume of code they had 10 years ago while 92% say the pressure to release software faster has increased. The growing responsibilities can lead to poor-quality code slipping into production environments, increasing costs. One report estimates the impact of buggy software at $2.84 trillion per year.

Products have emerged over the years to address the problem of code maintenance, including the cloud-based code quality management service SonarSource. SonarSource, whose technology detects reliability and vulnerability issues in code, today announced that it raised $412 million in a funding round co-led by Advent International and General Catalyst at a $4.7 billion valuation.

"Organizations across all industries have long understood that software is critical to running their businesses. Recently, they've begun to realize and recognize that source code is the key component of their software: source code dictates how software will behave and also perform, and as such must receive good care," SonarSource CEO Olivier Gaudin told TechCrunch via email. "SonarSource enables companies to improve the quality of their source code."

Gaudin says he launched SonarSource to enable developers to adopt best code quality practices that, in theory, could help fix problematic code. It's an acute problem: an alarming report from Veracode and Enterprise Strategy Group found that nearly half of organizations knowingly ship vulnerable code despite using cybersecurity tools, often to meet release deadlines. A separate survey from Veracode suggests that the majority of software library flaws (92%) can be fixed via an update, but that 79% of the time, developers never update libraries after they're added to a codebase, for fear of breaking functionality.

Gaudin has a financial industry background, having worked at JP Morgan as a developer and Deutsche Bank as a software team leader before co-founding SonarSource. Freddy Mallet, SonarSources second co-founder, was a project architect at E-Trade and CTO at agtech startup Hortis. Third co-founder Simon Brandhof also worked at Hortis and was a lead developer at online trading platform CPR Online.

One of the code analysis dashboards in SonarQube. Image Credits: SonarSource

"SonarSource was created to accommodate the market's eventual realization that software and its source code is the foundation of business and must be stewarded as such," Gaudin said. "From the beginning, SonarSource's mission has been to empower every single developer, and thus every organization, to build software right."

SonarSource was incorporated in 2008, and one of its first products was the open source program SonarQube. Designed to perform static code analysis (i.e., finding defects by examining a program's code without actually executing it), SonarQube embeds clean-code checks into the development process, supporting programming languages including Python, Java, C# and JavaScript.
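To make the idea concrete, static analysis means inspecting a program's source without running it. The sketch below is not SonarQube's actual engine (which ships hundreds of rules per language); it is a minimal toy rule, built on Python's standard `ast` module, that flags the classic smell of comparing to `None` with `==` instead of `is`:

```python
import ast

RULE_ID = "none-comparison"  # hypothetical rule name, for illustration only

def check_none_comparisons(source: str) -> list:
    """Flag `== None` / `!= None` comparisons without executing the code."""
    findings = []
    tree = ast.parse(source)          # parse the program into a syntax tree
    for node in ast.walk(tree):       # visit every node, no execution involved
        if isinstance(node, ast.Compare):
            for op, comparator in zip(node.ops, node.comparators):
                if (isinstance(op, (ast.Eq, ast.NotEq))
                        and isinstance(comparator, ast.Constant)
                        and comparator.value is None):
                    findings.append(
                        f"{RULE_ID}: line {node.lineno}: "
                        "prefer 'is None' / 'is not None' over '==' / '!='"
                    )
    return findings

sample = (
    "def lookup(d, key):\n"
    "    value = d.get(key)\n"
    "    if value == None:\n"   # the smell sits on line 3 of the sample
    "        return 'missing'\n"
    "    return value\n"
)

for finding in check_none_comparisons(sample):
    print(finding)
```

A real analyzer layers many such rules, tracks data flow across functions, and reports severity and fix suggestions, but the core mechanism of walking a parsed representation of the code is the same.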

In 2010, SonarSource's open source project hit a milestone of over 2,000 downloads per month. The startup sought to capitalize on its success with View, a commercial plugin for project portfolio management. After releasing more plugins and software, including SonarCloud (which analyzes open source projects) and SonarLint (an integrated development environment extension for static analysis), SonarSource expanded the scope of its analyzers to cover standards that encompass maintainability, reliability and security.

"Many competitors focus on just one part of delivering clean code, such as the security aspect. That's a promise to a risk or compliance department," Gaudin said. "SonarSource has a different approach: we're going to help the engineering team do a better job delivering code and help them invest the time they spend actually writing new code, as opposed to debugging old code. We provide a solution that allows these departments to raise their game and deliver better code. More time is spent on innovation and solving difficult problems for the organization."

SonarSource competes with a number of companies in the static code analysis software market, which one firm predicts could be worth $1.74 billion by the end of 2026 (up from $643 million in 2022). For example, r2c and DeepSource focus on code analysis for security and performance, while ShiftLeft attempts to automatically patch any code vulnerabilities that it finds.

All static code analysis products have downsides. They can't support every programming language, sometimes produce false positives and negatives, and can provide a false sense of security. They're only as good as the rules they're using to scan with, after all, which is why they aren't likely to replace quality assurance teams anytime soon.

SonarSource doesn't claim to have overcome these downsides. Its advantages, rather, are a head start and strong industry traction. SonarSource grew its commercial customer base by more than 2,000% over the last four years, to more than 16,000 organizations. Over 300,000 organizations, meanwhile, including 80 Fortune 100 companies, use a mix of the company's commercial and free products.


SonarSource's gross margin profile is above 90%, and annual recurring revenue stands at $175 million, which the company projects will reach $240 million this year. SonarSource plans to expand its headcount from 290 employees to north of 400 to meet that goal, according to Gaudin.

"SonarSource will use [the latest] investment to double its sales force in 2022 and grow its marketing team across existing offices in Geneva, Switzerland; Annecy, France; Bochum, Germany; and Austin, Texas. In addition, SonarSource will open a new regional headquarters in Singapore, allowing the company to build its business within the burgeoning Asia-Pacific market," Gaudin added.

Insight Partners and Permira also participated in SonarSources latest financing round.


What do developers do all day long? The answer may surprise you – and annoy them – ZDNet

Written by Steve Ranger, Editorial director, ZDNet

Steve Ranger is the editorial director of ZDNet. An award-winning journalist, Steve writes about the intersection of technology, business and culture, and regularly appears on TV and radio discussing tech issues.

You might think that the job titles 'software developer' or 'coder' are self-explanatory, but thanks to a variety of distractions and unexpected demands, developing software and writing code often come quite far down the to-do list for many.

A survey has found that, on average, software engineers have only about 10 hours a week of "deep work" time, thanks to the distractions and frustrations they face during the day.

"Junior engineers have a lot more of this time on average -- in fact, 20% more than senior engineers -- likely because they've got less administrative overhead to deal with," the survey of 600 software engineers and managers by software tools company Retool found.


Both junior and senior developers said that testing changes -- writing tests or doing manual tests -- was the thing they wish they could spend less time on. Senior developers wished they could spend less time recruiting or interviewing prospective hires.

Among the time-consuming activities that developers dislike are technical issues like slow SQL queries and database syncs. Working out who is actually responsible for a particular piece of code can take hours, while developers also complained about "waiting on people", including waiting for code reviews or requirements.

Almost all of the engineers surveyed agreed that open-source code was at least "somewhat" essential to their day jobs. That's probably because many of them rely on it on a day-to-day basis: more than 80% of developers are actively pulling open-source code into their work at least once a month, while almost 50% are doing it at least once a week.

According to the research, developers are regularly re-using code when they can: nearly half (44%) said they copied and pasted up to 50 lines of code a week from other sources, while a third (33%) said they copied somewhere between 50 and 100 lines a week; 13% said they copied 100 to 500 lines a week.

"In 2022, the vast majority of software engineers are running other people's code. They're building on top of open-source libraries, or re-using code from other parts of their company's codebase or from online tutorials," the survey said.


Spotify dances to the open source beat – VentureBeat



Just about every technology company under the sun wants to align itself with the open source sphere, whether it's Facebook open-sourcing its own internal projects or Microsoft doling out north of $7 billion to acquire GitHub, one of the biggest platforms for open source developers.

Spotify is no different. The music-streaming giant has open-sourced a number of its projects through the years, such as Backstage, which was recently accepted as an incubating project at the Cloud Native Computing Foundation (CNCF) after two years as an open source project. The company also recently joined the Open Source Security Foundation, opened a dedicated open source program office, and is now launching a fund to support independent open source projects.

In short, Spotify is doubling down on its open source efforts.

There are many reasons why a company might choose to open source its internal technologies, or contribute to those maintained by other companies or individuals. For starters, it can help engage the broader software development community and serve as a useful recruitment tool. A company may also contribute resources to community-driven projects that form a central part of its critical infrastructure, to help bolster their security, for example.

Backstage, for its part, is all about building customized developer portals, unifying a company's myriad tooling, services, apps, data, and documents in a single interface through which engineers can access their cloud provider's console, troubleshoot Kubernetes, and find all the documentation they need as part of their day-to-day work.
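In practice, each service registers itself with Backstage through a small descriptor file checked into its repository, and the portal indexes these files to build its catalog. The sketch below follows Backstage's published `catalog-info.yaml` descriptor format; the service name, team name, and GitHub slug are hypothetical, not taken from any real deployment:

```yaml
# catalog-info.yaml: registers one service with a Backstage catalog.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service            # hypothetical service name
  description: Handles checkout and payment flows
  annotations:
    # Lets Backstage plugins link the entity back to its source repo
    github.com/project-slug: example-org/payments-service
spec:
  type: service
  lifecycle: production
  owner: team-payments              # hypothetical owning team
```

Once registered, plugins keyed off the entity (documentation, CI status, Kubernetes views) attach their information to this one catalog entry, which is how the portal pulls a company's scattered tooling into a single pane.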

"The problem Backstage solves is complexity, the kind of everyday complexity that can really bog engineers and their teams down, which then slows your whole organization down," Tyson Singer, Spotify's head of technology and platforms, told VentureBeat. "Backstage as a product and as a platform is really about creating a better experience for engineers: streamlining their workflows, making it easier to share knowledge, and getting the messy parts of infrastructure out of their way. It enables them to better focus on building business value: innovative products and features."

Today, Backstage is used by dozens of companies spanning retail, gaming, finance, transport, and more, including Netflix, American Airlines, IKEA, Splunk, HP, Expedia, and Peloton. But when all is said and done, what does Spotify get from open-sourcing Backstage? Well, for starters, it gets a better version of Backstage for itself, thanks to the community-driven nature of the project.

"Let's imagine the counterfactual, where two years ago we didn't open source Backstage, and instead we poured the same amount of internal resources into it as we have gotten from the external community. Based on the tremendous community engagement so far, that would have been a huge investment and tricky to fund, and it still would not be as good a product as it is today," Singer explained. "A diversity of viewpoints and use cases, from adopting companies like the world's biggest airline or a fast-growing finance startup, individual contributors, and third-party software providers, has improved the product, making it more robust and enabling the platform to keep up with the pace of change going on both inside and outside a particular company."

But on top of that, the fact that Backstage is seeing adoption at some of the world's biggest companies indirectly benefits Spotify too, insofar as it ensures that its own product is among the de facto developer portal tools.

"If we had not open-sourced [Backstage], we'd be the only ones using and depending on Backstage," Singer continued. "If eventually a different open source solution emerged, we would have had to migrate to that solution, as the community-fed innovation eclipsed our ability to keep pace."

To support its ongoing open source efforts, Spotify has joined a long list of companies in launching a dedicated open source program office (OSPO), designed to bring formality and order to their open source initiatives, align OSS project goals with key business objectives, manage license and compliance issues, and more.

Spotify has, in fact, had an OSPO of sorts for the better part of a decade, but it constituted more of an informal group of employees who had other full-time roles at the company. As of this year, the company has a full-time OSPO lead in Per Ploug and is actively hiring for other roles.

Up to now, then, Spotify's open source work has been driven chiefly by the passion and engagement of the company's engineering teams, according to Singer.

"The enthusiasm has always been there, and we just needed to channel it," Singer said. "A dedicated OSPO brings more clarity to this process for everyone, including what expectations are, and what kind of support should be expected. It ensures that our efforts are properly prioritized and integrated into the way we work. We want to treat [open source] with the same level of ownership and dedication as we do our internal applications; creating a formal OSPO allows us to do that."

Spotify's OSPO is positioned within the company's platform strategy unit; however, it will ultimately straddle multiple teams and departments, given that open source software intersects with everyone from engineering and security to legal, HR, and beyond.

"Engineering teams have their areas of expertise, but we want our OSPO to go wide across multiple teams," Singer said. "The best position to do that is from within our platform strategy organization, which is the connective tissue between various R&D teams. It gives the OSPO visibility and independent positioning within that framework. It very well represents how intertwined open source is with ways of working, not only in Spotify but actually in any modern technology company."

A central component of any OSPO is security: ensuring that any open source component in the company's tech stack is safe and kept up to date with the latest version. So it's perhaps timely that Spotify recently joined the Open Source Security Foundation (OpenSSF), a pan-industry initiative launched by the Linux Foundation nearly two years ago to bolster the software supply chain.

With incumbent members such as Google, Microsoft, and JPMorgan Chase, Spotify is in good company, and its decision to join followed the critical Log4j security bug that came to light late last year. The OpenSSF also highlights how open source has emerged as the accepted model for cross-company collaboration: everyone benefits from more secure software, so it makes sense for everyone to pitch in together.

"Open source security is a topic that affects every tech company, or really any company that relies on software," Singer said. "We all depend on the open source ecosystem, which is why, as a technical community, we all have a responsibility to improve security where possible. As when we joined others in creating the Mobile Native Foundation, we see the problem as one of scale: how do you create solutions that can affect not just local problems but an entire landscape? We believe that participating in foundations, working together with other big companies who think about the problems and opportunities of scale within their businesses every day, makes a lot of sense for finding scalable solutions."

To further align itself with the open source realm, Spotify today lifted the lid on a new fund for independent (i.e., not Kubernetes-scale) open source project maintainers. The Spotify FOSS Fund will start out at €100,000 (about $109,000), with the company's engineers selecting projects they feel are most deserving of the funds and a separate committee making the final decision. The first tranche of chosen projects will be announced some time in May.

"The idea for Spotify's FOSS Fund came about by asking ourselves what we could do to help support the quality of open source code that we all depend upon," Singer said. "It's only natural for the larger tech players to play a role in supporting the open source ecosystem. We use it, we contribute to it, we're building projects for others to contribute to and depend upon; we feel it's important and necessary for us to contribute to the success of this community."

However, €100,000 isn't a huge amount of money in the grand scheme of things. Over the past year, we've seen Google pledge $100 million to support foundations such as OpenSSF and commit $1 million to a Linux Foundation open source security program. Recently, Google also partnered with Microsoft to fund another security program, the Alpha-Omega Project, to the initial tune of $5 million.

But it's perhaps unfair to compare supporting foundations and larger projects with smaller-scale indie projects that receive no financial backing whatsoever. Plus, it is still early days for the Spotify FOSS Fund, and it's likely to evolve.

"The fund will start with €100,000, the key word being 'start,'" Singer explained. "We're ready and willing to grow the fund, but we're using this initial amount to help us evaluate what kind of impact we can make. Funds will be distributed to ensure the maintainers have the financial means to continue maintaining their projects, fix security vulnerabilities, and continue improving the codebase. We will target projects that are independent, actively maintained, and relevant to our work here at Spotify."



What to expect at Red Hat Summit 2022: Join theCUBE for live coverage May 10-11 – SiliconANGLE News

Open source has altered a great deal of the IT landscape, and is transforming the hybrid cloud as well.

This is one of the fundamental premises behind Red Hat Inc.'s strategy for a hybrid cloud future, one that will be shaped by innovation generated from the open-source ecosystem.

Red Hat's vision for the open hybrid cloud and new business opportunities will be among the topics discussed as part of coverage by theCUBE, SiliconANGLE's livestreaming studio, during Red Hat Summit, May 10-11.

Red Hat Enterprise Linux paved the way for businesses to embrace open-source tools while bridging production applications between on-premises environments and the cloud. For Red Hat, the open-source ecosystem provides innovation and a mutual language between datacenter servers and microservices running in multiple public clouds. This is the essence of the open hybrid cloud.

"Hybrid cloud, multicloud, the ability to cross multiple clouds: that just wouldn't happen out of one company," said Paul Cormier, chief executive officer of Red Hat, during an eWeek interview in March. "That's the beauty of the open-source model: the best innovation wins. The platform is here now: hybrid multicloud. This open hybrid cloud platform is what's going to drive the innovation around it."

Red Hat Summit will offer an opportunity to hear how open-source code has shifted from a programmer's hobby to a production necessity in the modern enterprise, as theCUBE interviews company executives, partners and industry experts.

Red Hat's focus on the hybrid platform has meshed with parent company IBM's overall business strategy. In IBM's most recent quarterly earnings report, the company added 200 hybrid cloud customers and saw Red Hat's revenue increase 21% for the quarter year over year.

Red Hat's OpenShift platform is a key element in IBM's multicloud strategy. As recently acknowledged by Tom Rosamilia, senior vice president of IBM software, OpenShift's role in containerizing IBM's CloudPak has enabled customers to pursue a wide range of hybrid solutions.

This flexibility will become even more important as enterprises move increasingly to the edge. Red Hat is an active participant in this shift, positioning its operating system to maximize support for edge environments.

Last year, Red Hat added new features for its RHEL 8.4 version that can connect applications across edge deployments using the open hybrid cloud. RHELs new features were designed to simplify Kubernetes deployments using OpenShift in resource-limited environments.

"We see edge coming to life over the long term because it is focused on bringing compute closer to producers and data consumers," said Chris Wright, senior vice president and chief technology officer at Red Hat. "Edge will continue to grow, and so will our computing habits. Edge will emerge as a prominent location where a significant amount of computing happens."

A prominent use case for edge deployment can currently be found in the telecommunications world. An evaluation recently published by ABI Research noted that Red Hat and VMware Inc. have emerged as the two largest market leaders in 5G telco cloud-native platforms.

Red Hat's Ansible Automation Platform and Advanced Cluster Management for Kubernetes were seen as attractive cloud-native options for telcos in 5G deployment, according to the research report.

"It used to be hardware and software in the 4G and 3G days," Cormier said. "Now with 5G, that's all software, all the way out to the cell tower."

As the industry begins to embrace new technologies at the edge, major telcos are turning to open-source vendors such as Red Hat for solutions. In recent weeks, Red Hat has shared additional details about a new collaboration with Verizon Inc.

The latest insights from Red Hat reveal an effort to create a public cloud experience at scale, at the edge. Using OpenShift and cluster node management through a MachineSet solution, Red Hat and Verizon can deploy 5G technologies in edge compute zones with minimal complexity.

Red Hat is following a similar strategy with Türk Telekom, Turkey's integrated telecom operator. OpenShift is being deployed to develop and scale cloud-native applications from core to edge at Türk Telekom, targeting management of customer usage, AI-powered infrastructure automation, and network reporting. It is an example of Red Hat's progress toward the kind of open hybrid cloud model envisioned by the company.

"Applications are changing the world, and we need to manage our own and our customers' needs in a digital way," said Mehmet Fatih Bekin, data center and cloud services director at Türk Telekom, in a recent interview. "We realized that vanilla Kubernetes didn't meet our needs in areas like configuration and security best practices. Red Hat OpenShift frees up our staff to focus on making positive contributions to the business. And with its open-source development model, Red Hat can deliver platform innovation and fixes faster."

Red Hat Summit is a virtual event, with additional interviews to be broadcast on theCUBE. You can register for free here to access the live event. Plus, you can watch theCUBE's event coverage here on demand after the live event.

We offer you various ways to watch the live coverage of Red Hat Summit, including theCUBE's dedicated website and YouTube channel. You can also get all the coverage from this year's events on SiliconANGLE.

SiliconANGLE also has analyst deep dives in our Breaking Analysis podcast, available on iTunes, Stitcher, and Spotify.

Stay tuned for the complete list of speakers.

(* Disclosure: TheCUBE is a paid media partner for Red Hat Summit. Neither Red Hat Inc., the sponsor of theCUBE's event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Original post:

What to expect at Red Hat Summit 2022: Join theCUBE for live coverage May 10-11 - SiliconANGLE News

The Ins and Outs of Secure Infrastructure as Code – DARKReading

Infrastructure as code (IaC) has become a core part of many organizations' IT practices, with adoption of technologies like HashiCorp's Terraform and AWS CloudFormation increasing rapidly. The move to IaC sees companies moving away from either manually configuring servers or using imperative scripting languages to automate those changes and toward a model in which declarative code is used to outline a resource's preferred final state.
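As a minimal sketch of that declarative style (the provider, region, AMI ID and resource names below are illustrative assumptions, not from any specific deployment), a Terraform configuration states the desired end state and leaves the "how" to the tool:

```hcl
# Declare the desired final state; Terraform computes and applies
# whatever changes are needed to converge real infrastructure to it.
provider "aws" {
  region = "us-east-1"   # illustrative region
}

resource "aws_instance" "web" {
  ami           = "ami-12345678"   # hypothetical AMI ID
  instance_type = "t3.micro"

  tags = {
    Environment = "production"
  }
}
```

Running `terraform plan` then shows the difference between this declared state and what actually exists, and `terraform apply` reconciles the two.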

As with any change in approach to IT, there are security considerations to understand. The move to IaC presents some risks, along with opportunities to improve the way companies secure their environments. Given IaC's key role in configuring the security parameters of an organization's systems, and the speed at which a flawed template could be rolled out across a large number of systems, ensuring that good security practices are adhered to is vital to making the best use of this technology.

IaC Security Risks and Opportunities

With the move to IaC, there are new security risks to consider. The first is secrets management. When creating and managing resources, credentials will often be needed to authenticate to remote systems; when IaC code is written to automate these tasks, there is a risk that credentials or API keys may be hard-coded into the code. Care should be taken to ensure that proper secrets management processes are followed to avoid this. Secrets should be held in a secure location, such as a cloud key management service (KMS), and retrieved on demand by scripts as they run.
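As one hedged illustration of that pattern, using Terraform's AWS provider (the secret name and database settings below are hypothetical), a credential can be read from AWS Secrets Manager at apply time rather than committed to the repository:

```hcl
# Fetch the credential from AWS Secrets Manager at apply time,
# so no secret material is hard-coded in the template.
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "prod/app/db-password"   # hypothetical secret name
}

resource "aws_db_instance" "app" {
  identifier        = "app-db"
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "app"
  password          = data.aws_secretsmanager_secret_version.db_password.secret_string
}
```

Note that the retrieved value still ends up in Terraform's state file, so the state backend itself must be access-controlled and encrypted.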

A second risk is that misconfigurations may creep into the IaC templates (for example, if code is copy/pasted in from an external source) and then propagate quickly throughout an environment as the IaC is used. Avoiding this risk requires both automated and manual review, as with any other source code.
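A misconfiguration of this kind might look like the hypothetical fragment below: a world-readable storage bucket ACL, exactly the sort of copy/pasted default that automated IaC scanners are built to flag:

```hcl
# Misconfiguration example: a publicly readable bucket.
# Static analysis of the template catches this before it is ever deployed.
resource "aws_s3_bucket" "assets" {
  bucket = "example-assets"   # hypothetical bucket name
  acl    = "public-read"      # flagged by most IaC security scanners
}
```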

The opportunity inherent in moving to IaC-driven environments is that once all of your infrastructure is defined in code, it's possible to apply common automated linters and review tools to it to ensure that good practices are followed. Tooling can draw from common libraries of good practice and be supplemented with custom rules that apply organization-specific practices.

Additionally, with an IaC-based approach, all configurations should be stored in version-controlled source code repositories. This lets companies track modifications over time and ensures that appropriate access control and auditing are in place.

Lastly, IaC-based deployment means that test environments should be able to mirror production effectively, so security testing can be conducted safely with higher confidence that any results will be meaningful in production.

IaC Technology Stacks

There are a variety of options for IaC. Typically, large organizations will use many of these at the same time, as different tools have different strengths and weaknesses.

Terraform from HashiCorp is one of the most widely used IaC toolsets. It has the advantage of being open source and not tied to any one cloud platform or infrastructure provider, meaning that it works across a range of environments.

Unsurprisingly, the major cloud service providers also have IaC toolsets that focus on their clouds. Amazon's CloudFormation, Microsoft's ARM and Bicep, and Google's Cloud Deployment Manager all provide a means for users of that company's cloud to take advantage of the IaC paradigm.

Another popular option for cloud-native IaC is Pulumi, which allows developers to use programming languages they already know (e.g., JavaScript or Golang) to write their IaC templates.

IaC Review Tools

There are a number of open source tools that can help with the process of security reviews of IaC code. These tools take a similar approach in providing a rule set of common security misconfigurations for a given set of IaC languages. In addition to the main IaC format, some of these tools will review other formats, like Kubernetes manifests and Dockerfiles. Some of the commonly used tools in this arena include the following:

Smoothing the Security Path

The move to IaC is well underway at a variety of organizations. While it does bring challenges, the process, if well handled, can fundamentally improve organizations' overall security posture by allowing all of their system configurations to be held in version-controlled source code repositories and regularly checked for misconfigurations.

Given the power of IaC, it is vital that its adoption be accompanied by strong security practices, with scanning and validation key to those processes. By using open source review tools like the ones mentioned above, companies can help to smooth their path in adopting this technology.

See original here:

The Ins and Outs of Secure Infrastructure as Code - DARKReading

Iterative to Launch Open Source Tool, First to Train Machine Learning Models on Any Cloud Using HashiCorp’s Terraform Solution – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Iterative, the MLOps company dedicated to streamlining the workflow of data scientists and machine learning (ML) engineers, today announced a new open source compute orchestration tool using Terraform, a solution by HashiCorp, Inc., the leader in multi-cloud infrastructure automation software.

Terraform Provider Iterative (TPI) is the first product on HashiCorp's Terraform technology stack to simplify ML training on any cloud while helping infrastructure and ML teams to save significant time and money in maintaining and configuring their training resources.

Built on Terraform by HashiCorp, an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services, TPI allows data scientists to deploy workloads without having to figure out the infrastructure.

Data scientists oftentimes need a lot of computational resources when training ML models. This may include expensive GPU instances that need to be provisioned for training an experiment and then de-provisioned to save on costs. Terraform helps teams to specify and manage compute resources. TPI complements Terraform with additional functionality, customized for machine learning use cases:

With TPI, data scientists need to configure the resources they require only once and can then deploy anywhere in minutes. Once it is configured as part of an ML model experiment pipeline, users can deploy on AWS, GCP, Azure, on-prem, or with Kubernetes.
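Based on TPI's public documentation, a task definition is plain Terraform; the sketch below is an assumption-laden illustration (the machine shorthand, script, and exact attribute names should be checked against the current provider docs), not a verified configuration:

```hcl
terraform {
  required_providers {
    iterative = {
      source = "iterative/iterative"
    }
  }
}

# Hypothetical training task: TPI provisions the machine, runs the script,
# and tears the instance down when training finishes.
resource "iterative_task" "train" {
  cloud   = "aws"    # could equally target "gcp", "az" or "k8s"
  machine = "m+t4"   # assumed shorthand for a medium machine with a GPU

  script = <<-END
    #!/bin/bash
    pip install -r requirements.txt
    python train.py
  END
}
```

Because the same resource block works against any supported backend, switching clouds is, in principle, a one-attribute change.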

"We chose Terraform as the de facto standard for defining the infrastructure-as-code approach," said Dmitry Petrov, co-founder and CEO of Iterative. "TPI extends Terraform to fit with machine learning workloads and use cases. It can handle spot instance recovery and lets ML jobs continue running on another instance when one is terminated."

To learn more about TPI visit the blog.

About Iterative

Iterative.ai, the company behind Iterative Studio and popular open-source tools DVC and CML, enables data science teams to build models faster and collaborate better with data-centric machine learning tools. Iterative's developer-first approach to MLOps delivers model reproducibility, governance, and automation across the ML lifecycle, all integrated tightly with software development workflows. Iterative is a remote-first company, backed by True Ventures, Afore Capital, and 468 Capital. For more information, visit Iterative.ai.

Visit link:

Iterative to Launch Open Source Tool, First to Train Machine Learning Models on Any Cloud Using HashiCorp's Terraform Solution - Business Wire

NSF-funded project aims to mitigate malware and viruses by making them easily understandable – ASU News Now

April 25, 2022

As the software development landscape evolves, new security vulnerabilities are surfacing. Traditionally, a program's source code could shed light on its vulnerabilities, but acquiring high-quality source code for the purpose of finding weaknesses can be difficult because of compiling.

Compiling refers to the process of transforming and optimizing a program's source code to generate a final executable, which is a file that causes a computer to perform indicated tasks according to the encoded instructions. While an executable performs well and runs quickly on computers, it no longer has any information about the original source code.

Assistant Professor Ruoyu (Fish) Wang has received National Science Foundation recognition and financial support for his work to mitigate the effects of malware and computer viruses by making them easily understandable. The research results may enable analysts and researchers to uncover source code in a manner that identifies vulnerabilities. Photo by Erika Gronek/ASU

Today, more and more software is developed in high-level programming languages, such as C++, Go and Rust, because of their many advantages, including higher development speed and better software engineering practices. Most importantly, programs written in high-level languages are compiled into machine code, the elemental language of computers, and will execute on computers at what is known as native speed. Executing at native speed allows for the fastest results.

Unfortunately, cybercriminals have also joined the transition to high-level programming, meaning a growing number of computer viruses and malware are programmed using these languages. And existing techniques do not allow security analysts and researchers to uncover malevolent source code with satisfactory quality.

Ruoyu (Fish) Wang, an assistant professor of computer science and engineering in the Ira A. Fulton Schools of Engineering at Arizona State University since 2018, is addressing this security concern with a 2022 National Science Foundation Faculty Early Career Development Program (CAREER) Award by discovering new techniques for recovering source code, a process known as decompilation.

"My project will develop a set of generic, automated decompilation techniques that transform these viruses and malware samples into accurate, concise and human-readable source code," Wang says. "As an added benefit, this project will enable software hardening and vulnerability mitigation without accessing the high-level language source code of software, which will help improve the security portfolio in scenarios where legacy software is in use."

Researchers have worked on binary decompilation for more than 25 years, yet a critical problem that continues to hinder progress is the lack of a clear metric to evaluate the output quality.

"A fundamental problem, as I see it, is that decompilation can lead to many different end goals, such as software behavior analysis, vulnerability discovery, generic hardening, patching and recompilation," Wang says. "These goals may have vastly different requirements on various aspects of the output."

Along with his students and colleagues in the School of Computing and Augmented Intelligence, one of the seven Fulton Schools, Wang will first develop a set of objectives under each end goal, then create standardized metrics for evaluating the quality of decompilation output.

"Guided by these metrics, we will develop novel techniques that will transform machine code into a high-level intermediate language known as angr IL, or AIL," Wang says. "With different end goals, we may have different focuses or make different compromises during code transformation."

The development of a new decompiler for each high-level programming language can be tedious and expensive. With that in mind, Wang and his team will aim to automatically generate programming-language-specific decompilation transformation rules by using a novel technique called Compiler Transformation Inference and Inversion, or CTII.

"We will use the latest progress in the fields of natural language processing and evolutionary computation to assist with the generation of these transformation rules," Wang says. "We will open source all research artifacts under this award. The foundation of our research, angr and the angr decompiler, are already available on GitHub."

Wang's research will take place in ASU's Laboratory of Security Engineering for Future Computing, known as SEFCOM. Wang credits the skilled reputations of his SEFCOM colleagues, Assistant Professor Yan Shoshitaishvili, Associate Professor Adam Doupé and Assistant Professor Tiffany Bao, all of whom are computer science and engineering faculty in the School of Computing and Augmented Intelligence, as one of the reasons his project received NSF funding.

"Our team is well known in the computer security community for conducting open, usable and reproducible research in binary analysis," Wang says. "I like to work with fun and awesome people who share similar ideologies, and I firmly believe that modern systems research is only possible via a coordinated team effort. My colleagues and I form a great team at SEFCOM and ASU, and I do not see any possibility to enjoy the same level of productivity through teamwork anywhere else."

Read the original post:

NSF-funded project aims to mitigate malware and viruses by making them easily understandable - ASU News Now