Labor mulls over the future of persecuted whistleblower and his lawyer, Collaery – Echonetdaily

Barrister Bernard Collaery is defending whistleblower Witness K for exposing government crimes. Photo: ustly.info

While jailed Australian citizen and journalist Julian Assange waits for the new Labor government to act on his behalf over his US extradition from the UK, another whistleblower and his lawyer face court behind closed doors on Australian soil.

After the federal government, under Liberal PM John Howard, allegedly bugged East Timor's cabinet rooms during the 2004 bilateral negotiations over the Timor Sea gas and oil treaty, intelligence officer Witness K blew the whistle on the secret operation. Yet since 2018, he and his lawyer, well-respected ACT barrister Bernard Collaery, have been persecuted in court, behind closed doors, by the Liberal-Nationals government.

The Echo asked recently re-elected local Labor MP Justine Elliot whether Labor is prepared to back whistleblowers and transparency now it is in government, and whether it will drop the persecution of Bernard Collaery and Witness K.

Ms Elliot replied, "Before the federal election, Mark Dreyfus, as the Shadow Attorney-General, consistently questioned elements of the Collaery prosecution. In opposition, Labor made a commitment that, if elected, we would seek urgent briefings on that matter.

"Mark Dreyfus, as Attorney-General in the Albanese Labor government, has now sought and received those briefings. This case was one of the first issues he raised in his new role, and he is currently considering that information."

According to federal government transcripts (Hansard), Independent MP Andrew Wilkie told parliament on June 28, 2018, that "The perpetrator [of the bugging] was the Howard government, although the Rudd, Gillard and Abbott governments are co-conspirators, after the fact."

Wilkie described the bugging as illegal and unscrupulous, noting it was done after Australia withdrew from the jurisdiction of the International Court of Justice and the International Tribunal for the Law of the Sea, bodies that would have provided for transparent negotiations with Timor-Leste over its oil and gas reserves.

Australian fossil fuel company Woodside benefited from the closed-door deal.

Wilkie said at the time, "In effect, Foreign Minister Alexander Downer, and by implication Australia, one of the richest countries in the world, forced East Timor, the poorest country in Asia, to sign a treaty which stopped them obtaining their fair share of the oil and gas revenues, and that's simply unconscionable."

More:

Labor mulls over the future of persecuted whistleblower and his lawyer, Collaery - Echonetdaily

Taking away the Statue of Liberty: the week’s morning news conferences – Mexico News Daily

President López Obrador had some banking to do last weekend, opening five branches of the state-owned Banco del Bienestar (Bank of Well-Being) in Morelos, Mexico City and México state. At one inauguration the president donned a Hawaiian-style flower garland, but stopped short of a hula dance.

Monday

The president was in a party mood on Monday. He congratulated basketball player Juan José Toscano-Anderson for being the first Mexican to win the NBA finals and couldn't hide his feelings about another recent victor. "Today we're going to listen to cumbia. I'm really happy about Gustavo Petro's triumph," the president said, referring to the music genre from Colombia and the same country's newly elected left-wing leader.

However, the president's mood soured on the subject of government collusion with organized crime. "That doesn't exist in our government. It's so absent that, to attack us, they make it up ... they started making up that the government and I had links to organized crime. They can't prove anything because we simply have principles and ideals ... there's no evidence," he insisted.

The tabasqueño's winning smile returned at the mention of Petro's victory, for which he took some credit. "We started a new era in the resurgence of democratic movements with a social dimension in America," he said of his government, before calling for music: a song by Margarita la Diosa de la Cumbia (Margarita the Goddess of Cumbia), a Colombian-Mexican singer.

In relieving news for viewers, the head of the consumer protection agency Profeco assured that, despite fears, there was no shortage of toilet paper in Mexico. Another point in the country's favor, the president explained, was the safety of its capital: "Mexico City is safer than New York and is one of the safest cities in the world," he assured.

Tuesday

In the health update, the head of the Mexican Social Security Institute (IMSS), Zoé Robledo, addressed the country's shortage of doctors. He said that no applicant had attended an interview for 78% of the 13,765 advertised posts, which call for specialists in poorly served, remote areas.

The well-being of Mexicans outside the country was of equal concern to the president. "We are ensuring that there is no mistreatment and that there is no discrimination [against migrants in the U.S.] ... we are not going to allow any candidate, any party, for electoral purposes in the United States to use Mexicans as a piñata. The time of silence is over, because there are very racist groups that used xenophobia, the hatred of foreigners, to get votes," he said.

Another foreigner is unlikely to be offered the red carpet on arrival to the land of the free: investigative journalist Julian Assange's extradition to the U.S. was approved by U.K. authorities. The president reiterated his objection to Assange's imprisonment. "He is a prisoner of conscience. He is being unjustly treated. His crime was to denounce serious violations of human rights and the interference of the United States government in the internal affairs of other countries ... He is the world's best journalist of our time ... This is shameful," he insisted.

"What about freedoms? Are we going to remove the Statue of Liberty in New York? I'm going to ask President Biden to address this issue," the president added of the Assange case, before repeating his asylum offer to the journalist.

Wednesday

The president expressed his condolences for the two Jesuit priests murdered in the Sierra Tarahumara, the rural Chihuahua home of the Rarámuri people. He added that both men were around 80 and had been trying to save another man.

Elizabeth García Vilchis, the government's media monitor, called foul in her "Who's who in the lies of the week" section. She said the head of a civil organization had invented a rape case involving National Guardsmen and insisted social media had been manipulated to promote a "narco narrative" of the president collaborating with cartels.

"As the song says, sometimes customs are stronger than love," López Obrador said of snail-like reforms of the state oil company Pemex, borrowing words from the much-cherished singer Juan Gabriel. He stole another line later in the conference, this time from a 19th century Russian critic, to offer a view on Spanish-Peruvian writer Mario Vargas Llosa, an adversary of the president. "Do you think he can write anything worth reading? Imagination and talent are lost when someone gives themselves entirely to lying," he said, in a barb directed at Vargas Llosa.

Thursday

Shorty, The Egg, The Accountant, The Cowboy, The Cow and The Devil were the colorful names of choice of some of Mexico's recently arrested criminals, the deputy security minister said. In a plot fitting of a Western, Ricardo Mejía Berdeja added that 5 million pesos (US $250,000) was being offered in the search for a man by the name of El Chueco (The Crooked) for the murder of the two priests in Chihuahua.

The president relayed Pope Francis' reaction to the killings. "I express my sorrow and dismay at the murder in Mexico of two Jesuits ... How many murders in Mexico! Violence does not solve problems, but only increases unnecessary suffering," the pontifex maximus' tweet read.

"We totally agree, because there are still those who think that violence must be confronted with violence, evil with evil," López Obrador added.

Later in the conference, the president insisted that his adversaries were wrong to challenge his security strategy. "They are defending a failed strategy. They want us to use the full force of the state ... an eye for an eye, a tooth for a tooth. That is the essence of conservative thought, that is what led Calderón to declare war," he said, referring to a former president.

At the close, the president revealed an inspiration for his philosophy on law and order. "With serenity and patience, as Kalimán would say," he said, in reference to the 1960s Mexican comic hero, an Indian orphan found abandoned in a river (so the story goes) and dedicated to fighting for justice.

Friday

The president reviewed the work of the Financial Times, The Washington Post, The Wall Street Journal and The New York Times on Friday, which he accused of being very quiet about the extradition of Assange. He had further advice for the European Union, whose parliament condemned violence against Mexican journalists in a March resolution. "The European Union is accusing us of not respecting freedom of expression and saying that journalists are persecuted ... And now with Julian Assange, not a pronouncement, nothing, silence. They act as subordinates to the groups of economic power and political power," he said.

The tabasqueño also had some constructive criticism for the U.S. Democratic Party later in the conference. He blamed Senator Bob Menendez for preventing the leaders of Cuba, Nicaragua and Venezuela from attending the Summit of the Americas earlier this month and criticized other Democratic lawmakers for blocking Biden's infrastructure bill. "President Biden is a good politician and a good human being, but they're all very abusive, they totally take advantage ... hopefully they support the president and support their party. They are leading their party to failure," López Obrador warned Democrats.

Mexico News Daily

Read more here:

Taking away the Statue of Liberty: the week's morning news conferences - Mexico News Daily

GitHub Makes Copilot Available to the Public for $10/month, Free for Students and Open Source Project Maintainers – WP Tavern

GitHub has announced that Copilot, its new AI pair programming assistant, is now available to developers for $10/month or $100/year. Verified students and maintainers of open source projects will have free access to Copilot. The assistant is available as an extension for popular code editors, including Neovim, JetBrains IDEs, Visual Studio, and Visual Studio Code.

Copilot was trained on billions of lines of public code in order to offer real-time code suggestions inside the editor. GitHub claims it is capable of suggesting complete methods, boilerplate code, whole unit tests, and complex algorithms.

"With GitHub Copilot, for the first time in the history of software, AI can be broadly harnessed by developers to write and complete code," GitHub CEO Thomas Dohmke said. "Just like the rise of compilers and open source, we believe AI-assisted coding will fundamentally change the nature of software development, giving developers a new tool to write code easier and faster so they can be happier in their lives."

Despite its many claims to improve developer efficiency, Copilot is still a controversial tool. Opponents object to the tool's creators training the AI on open source code hosted on GitHub, generating code without attribution, and then charging users monthly to use Copilot. It has also been criticized for producing insecure code and copying large chunks of code verbatim.

Even after 12 months in technical preview, Copilot remains generally polarizing at its public launch. Developers seem either impressed by its capabilities or offended by its ethical ambiguities. GitHub had more than 1.2 million developers in its technical preview and reports that those who started using Copilot quickly found it an indispensable part of their daily workflows.

"In files where it's enabled, nearly 40% of code is being written by GitHub Copilot in popular coding languages, like Python, and we expect that to increase," Dohmke said. "That's creating more time and space for developers to focus on solving bigger problems and building even better software."

See the original post:

GitHub Makes Copilot Available to the Public for $10/month, Free for Students and Open Source Project Maintainers - WP Tavern

Visual Studio adds ability to edit code in All-in-One Search – The Register

Microsoft has added the ability to edit code while in Visual Studio's All-In-One Search user interface.

The feature is included in Visual Studio 2022 17.3 Preview 2 and follows changes to search functionality in the development suite. At the start of the year, Microsoft introduced indexed Find in Files to speed up the already rapid searching (compared to Visual Studio 2019 at any rate).

The indexed Find in Files fired up a ServiceHub.IndexingService.exe process on solution load or folder open which scraped through the files to construct an index. Worries that the indexer would slug performance like certain other Microsoft indexing services were alleviated somewhat by the use of Below Normal operating system priority.

In April, with Visual Studio 17.2 Preview 3, a new All-In-One search experience turned up, which merged both the existing Visual Studio Search and Go To functionality into an unhelpful pop-up window in the IDE.

It's fair to say the idea was not universally well received as users requested that the Visual Studio team "stop wasting [their] time with searches," remarked that they "don't understand what is wrong with the current search," and wondered "if the VS team works on anything other than adding new searches."

Slightly ominously, one user said that they "would prefer if Visual Studio just follows how VS Code has implemented its search."

Visual Studio Code is the open-source elephant in the room, considerably less weighed down by the requirements of its sibling's legacy. VS Code regularly tops the charts of favored developer tools, with the full-fat Visual Studio trailing behind. As one user noted last month: "I will open VS only for the good old Winforms designer"

So what to do? Keep on adding stuff to Search, of course! The new feature is intended to allow developers to edit code directly in the search window via the familiar editor experience (think IntelliSense and Quick Actions). One can configure the code/results arrangement to be vertical or horizontal or simply turn off the code preview altogether.

The new search experience remains a preview and must be enabled via Tools > Options > Environment > Preview Features. Having taken the functionality for a spin, we can confirm it works as described and was positively handy when it came to dealing with massive solutions. However, we doubt it will do much to stop developers jumping ship for something a bit less bloated when presented with the opportunity.

More:

Visual Studio adds ability to edit code in All-in-One Search - The Register

Academic, Industry Leaders Form OpenFold AI Research Consortium to Develop Open Source Software Tools To Understand Biological Systems and Discover…

DAVIS, Calif.--(BUSINESS WIRE)--A set of leading academic and industry partners are announcing the formation of OpenFold, a non-profit artificial intelligence (AI) research consortium of organizations whose goal is to develop free and open source software tools for biology and drug discovery. OpenFold is a project of the Open Molecular Software Foundation (OMSF), a non-profit organization advancing molecular sciences by building communities for open source research software development.

OpenFold's founding members are the Columbia University Laboratory of Mohammed AlQuraishi, Ph.D., Arzeda, Cyrus Biotechnology, Genentech's Prescient Design, and Outpace Bio. The consortium, whose membership is open to other organizations, is hosted by OMSF and supported by Amazon Web Services (AWS) as part of the AWS Open Data Sponsorship Program. OMSF also hosts OpenFreeEnergy and OpenForceField.

Brian Weitzner, Ph.D., Associate Director of Computational and Structural Biology at Outpace and a co-founder of OpenFold, said, "In biology, structure and function are inextricably linked, so a deep understanding of structure is required to elucidate molecular mechanisms and engineer biological systems. We believe that open collaboration and access to powerful AI-powered structural biology tools will transform biotechnology and biosciences by empowering researchers and educators spanning life science companies, tech companies and academia with free access to use and extend these tools to accelerate discovery and develop life-changing technologies."

The first major research area for the consortium is to create state-of-the-art AI-based protein modeling tools which can predict molecular structures with atomic accuracy. The OpenFold consortium is modeled after pre-competitive technology industry open source consortia such as Linux and OpenAI.

First consortium-released AI model to predict protein structure yielding impressive results

The OpenFold founders also officially announced today the full release of its first protein structure prediction AI model developed in Dr. AlQuraishi's laboratory, first publicly acknowledged on Twitter on June 22, 2022. The model is based on groundbreaking work at Google DeepMind and the University of Washington's Institute for Protein Design. The software is available under a free and open source license from The Apache Software Foundation at https://github.com/aqlaboratory/openfold. Training data can be found on the Registry of Open Data on AWS. A formal preprint and publication will be forthcoming.

Yih-En Andrew Ban, Ph.D., VP Computing at Arzeda and co-founder of OpenFold, said, "This first OpenFold AI model is already producing highly accurate predictions of protein crystal structures as benchmarked on the Continuous Automated Model EvaluatiOn (CAMEO), and has yielded on-average higher accuracy and faster runtimes than DeepMind's AlphaFold2. An example output from OpenFold, with comparison to experimental data, is included in the figure."

CAMEO is a project developed by the protein structure prediction scientific community to evaluate the accuracy and reliability of predictions.

Lucas Nivon, Ph.D., CEO at Cyrus and co-founder of OpenFold, said, "The first release of the OpenFold software includes not just inference code and model parameters but full training code, a complete package that has not been released by another entity in the space. It will allow a full set of derivative models to be trained for specialized uses in drug discovery of biologics, small molecules, and other modalities."

Researchers around the world will be able to use, improve, and contribute to what the consortium founders describe as their "predictive molecular microscope." Current and future work will extend these derivative models to integrate with other software in the field and to be more useful for protein design and biologics drug discovery specifically.

Richard Bonneau, Ph.D., Executive Director at Genentech's Prescient Design, said, "OpenFold is many things to us, a code, a forum, a set of great minds to discuss our favorite topics! It has been a wonderful experience so far, and we are really excited to build out the next stages of the roadmap!"

Multiple other corporate and non-profit organizations are currently joining the OpenFold consortium as full members, and the founders invite biotech, pharma, technology and other research organizations to join. The consortium is currently evaluating proposals for new AI protein projects from academic groups around the world.

About OpenFold

OpenFold is a non-profit artificial intelligence (AI) research consortium of academic and industry partners whose goal is to develop free and open-source software tools for biology and drug discovery, hosted as a project of the Open Molecular Software Foundation. For more information please visit: OpenFold Consortium

View post:

Academic, Industry Leaders Form OpenFold AI Research Consortium to Develop Open Source Software Tools To Understand Biological Systems and Discover...

Boycott 7-Zip Because It’s Not On Github; Seriously? – PC Perspective

There is a campaign on Reddit gaining some traction that calls for a boycott of the software because, in classic "no true Scotsman" fashion, it is supposedly not truly Open Source. The objection raised is that 7-Zip is not present on GitHub, GitLab, or any other public code hosting and therefore is not actually Open Source. The fact that those sites do not appear at all in the Open Source Initiative's official definition of open source software doesn't seem to dissuade those calling for the boycott whatsoever.

Indeed you can find the source code for 7-Zip on SourceForge, an arguably much easier site to deal with than the Gits, and it is indeed licensed under the GNU Lesser GPL. That would indeed qualify as open source software; the use of the LGPL is likely because 7-Zip includes the unRAR library, to be able to unzip RAR files, which requires a license from RARLAB.

The evidence for 7-Zip's supposed lack of openness is based on comments from a 12-year-old Reddit thread and the fact that there are sometimes security vulnerabilities in the software. As The Register points out, the existence of the NanaZip fork of 7-Zip, and the fact that 7-Zip has no problem with it, is much stronger evidence that the software is indeed open source.

You can find a link to the thread in the article, if you want to participate in one of the internet's current pointless arguments.

Read the original post:

Boycott 7-Zip Because It's Not On Github; Seriously? - PC Perspective

It’s a Race to Secure the Software Supply Chain Have You Already Stumbled? – DARKReading

The digital world is ever-increasing in complexity and interconnectedness, and that's nowhere more apparent than in software supply chains. Our ability to build upon other software components means we innovate faster and build better products and services for everyone. But our dependence on third-party software and open source increases the complexity of how we must defend digital infrastructure.

Our recent survey of cybersecurity professionals found one-third of respondents monitor less than 75% of their attack surface, and almost 20% believe that over half of their attack surface is unknown or not observable. Log4Shell, Kaseya, and SolarWinds exposed how these statistics can manifest as devastating breaches with wide-reaching consequences. Cybercriminals already know supply chains are highly vulnerable to exploitation.

Last year, a threat actor exploited a vulnerability in Virtual System Administrator (VSA) provider Kaseya to inject REvil ransomware into code for VSA. Kaseya supported thousands of managed service providers (MSPs) and enterprises, and its breach compromised a critical network within thousands of organizations. Consequently, these organizations' internal systems were also compromised.

The ripple effect that Kaseya had on its customers can happen to any organization that uses a third-party software vendor. The European Union Agency for Cybersecurity (ENISA) analyzed 24 recent software supply chain attacks and concluded that strong security protection is no longer enough. The report found supply chain attacks increased in number and sophistication in 2020, continued in 2021, and, based on recent attacks by Lapsus$, is likely to carry over through 2022.

Similar to third-party software vendors, but at an even greater magnitude, open source code can have a devastating impact on digital function if left insecure; the havoc wreaked by Log4Shell illustrates this. These consequences arise partly because open source software remains foundational to nearly all modern digital infrastructure and every software supply chain. The average application uses more than 500 open source components. Yet the limited resources, training, and time available to the maintainers who voluntarily support projects mean they struggle to remediate vulnerabilities. These factors have likely contributed to high-risk open source vulnerabilities remaining in code for years.

This issue demands immediate action. That's why the National Institute of Standards and Technology (NIST) released its security guidelines in February. But why are we still so slow to try and secure the software supply chain effectively? Because it's tough to know where to start. It's challenging to keep up with security updates for your own software and new products, let alone police other vendors to ensure they match your organization's standards. To add more complexity, many of the open source components that underpin digital infrastructure lack the proper resources for project maintainers to keep these components fully secure.

So, how do we secure it? It all looks pretty daunting, but here's where you can start.

First, get your house in order and identify your attack resistance gap: the space between what organizations can defend and what they need to defend. Know your supply chain and implement strategies that set teams up for success:

Then, enforce your strategies and standards to maintain security for your organization and the collective security of the Internet:

Most in the cybersecurity community are familiar with Murphy's Law: "Everything that can go wrong, will." It defines the mindset of anyone working in this field. And if my experience in this industry has taught me anything, it is that you just have to do your best to keep up with the inevitable increase in challenges, risks, and complexity of securing digital assets. Part of staying ahead of these challenges is remaining highly proactive about your security best practices, and if you haven't properly secured your software supply chain yet, you're already behind. But even if you've had a false start, the good news is that it's never too late to get back up.

Read more here:

It's a Race to Secure the Software Supply Chain Have You Already Stumbled? - DARKReading

Developing a Cloud-Native Application on Microsoft Azure Using Open Source Technologies – InfoQ.com

Key Takeaways

Cloud native is a development approach that improves building, maintainability, scalability, and deployment of applications. My intention with this article is to explain, in a pragmatic way, how to build, deploy, run, and monitor a simple cloud-native application on Microsoft Azure using open-source technologies.

This article demonstrates how to build a cloud-native application replicating real-case scenarios through a demo application, guiding the reader step by step.

Without any doubt, one of the latest trends in software development is the term cloud native. But what exactly is a cloud-native application?

Cloud-native applications are applications built around various cloud technologies or services hosted in the (private or public) cloud. Cloud native is a development approach that improves building, maintainability, scalability, and deployment of applications. Often, they are distributed systems (commonly designed as microservices), and they also use DevOps methodologies to automate application building and deployment, which can be done at any time on demand. Usually these applications provide APIs through standard protocols such as REST or gRPC, and they are interoperable through standard tools, such as Swagger (OpenAPI).

The demo app is quite simple, but at the same time it involves a number of factors and technologies combined to replicate the fundamentals of a real-case scenario. This demo doesn't include anything about authentication and authorization because, in my opinion, it would bring too much complexity that is not required to explain the topic of this article.

Simple Second-Hand Store application

Simple Second-Hand Store (SSHS) is the name of the demo app described in this article.

SSHS system design overview.

SSHS is a simple cloud-native application that sells second-hand products. Users can create, read, update, and delete products. When a product is added to the platform, the owner of that product receives an email confirming the operation's success.

When developing a microservice architecture, decomposing the business requirements into a set of services is actually the first step. A few of these principles are:

These simple principles help to build consistent and robust applications, gaining all the advantages that a distributed system can provide. Keep in mind that designing and developing distributed applications is not an easy task, and ignoring a few rules could lead to both monolithic and microservices issues. The next section explains by examples how to put them in practice.

It is easy to identify two contexts in the Simple Second-Hand Store (SSHS) application: the first one is in charge of handling products' creation and persistence. The second context is all about notification and is actually stateless.

Coming down to the application design, there are two microservices: the ProductCatalog service (Pc), which exposes the CRUD APIs for products and owns their persistence, and the Notifications service (Nts), which reacts to product events by emailing the owner.

At a higher level, microservices can be considered a group of subsystems composing a single application. And, as with traditional applications, components need to communicate with other components. In a monolithic application you can do it by adding some abstraction between the different layers, but of course that is not possible in a microservice architecture, since the code base is not the same. So how can microservices communicate? The easiest way is through the HTTP protocol: each service exposes some REST APIs for the other, and they can easily communicate. But although this may sound good at first, it introduces dependencies into the system. For example, if service A needs to call service B to reply to the client, what happens if service B is down or just slow? Why should service B's performance affect service A, spreading the outage to the entire application?

This is where asynchronous communication patterns come into play, to help keep components loosely coupled. Using asynchronous patterns, the caller doesn't need to wait for a response from the receiver; instead, it throws a fire-and-forget event, and then someone may catch this event to perform some action. I used the word someone because the caller has no idea who is going to read the event; maybe no one will catch it.

This pattern is generally called pub/sub, where a service publishes events and others may subscribe to them. Events are generally published on another component, called an event bus, that works like a FIFO (first in, first out) queue.

Of course, there are more sophisticated patterns other than the FIFO queue, even if it is still used a lot in real environments. For example, an alternative scenario may have consumers subscribing to a topic rather than a queue, copying and consuming messages belonging to that topic and ignoring the rest. A topic is, generally speaking, a property of the message, such as the subject in AMQP (Advanced Message Queuing Protocol) terms.

Using asynchronous patterns, service B can react to some events occurring in service A, but service A doesn't know anything about who the consumers are or what they are doing. And obviously its performance is not affected by other services. They are completely independent from each other.

NOTE: Unfortunately, sometimes using an asynchronous pattern is not possible, and even if using synchronous communication is an antipattern, there may be no alternative. This shall not become an excuse to build things quicker, but keep in mind that in some specific scenarios it may happen. Do not feel too guilty if you have no alternatives.

In the SSHS application, the microservices don't need direct communication, since the Nts service must react to events that occur on the Pc service. This can clearly be done asynchronously, through a message on a queue.
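
As a rough illustration of that fire-and-forget style, here is a minimal sketch of how the Pc service might publish a "product added" event to an Azure Service Bus queue; the queue name, class name, and event shape are assumptions, not part of the original sample:

using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

namespace ProductCatalog.Events
{
    // Hypothetical publisher: sends a serialized event to a queue and moves on,
    // without waiting for (or knowing about) any consumer such as the Nts service.
    public class ProductAddedPublisher
    {
        private readonly ServiceBusSender _sender;

        public ProductAddedPublisher(ServiceBusClient client)
        {
            // "product-added" is an assumed queue name.
            _sender = client.CreateSender("product-added");
        }

        public Task PublishAsync(object productAddedEvent)
        {
            var message = new ServiceBusMessage(JsonSerializer.Serialize(productAddedEvent));
            return _sender.SendMessageAsync(message);
        }
    }
}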

For the same reasons set out in the "Communication between microservices" paragraph, to keep services independent from each other, a different storage for each service is required. It doesn't matter if a service has one or more storages using a multitude of technologies (often both SQL and NoSQL); each service must have exclusive access to its repository, not only for performance reasons, but also for data integrity and normalization. The business domain can differ greatly between services, and each service needs its own database schema, which could be very different from one microservice to another. On the other hand, the application is usually decomposed following business bounded contexts, and it is quite normal to see schemas diverge over time, even if at the beginning they look the same. Summarizing, merging everything together leads back to the monolithic application's issues, so why use a distributed system at all?

The Notifications service doesn't have any data to persist, while the ProductCatalog offers some CRUD APIs to manage uploaded products. These are persisted in a SQL database, since the schema is well defined and the flexibility given by NoSQL storage is not needed in this scenario.

Both services are ASP.NET applications running on .NET 6 that can be built and deployed using continuous integration (CI) and deployment techniques. In fact, GitHub hosts the repository, and the build and deployment pipelines are scripted on top of GitHub Actions. The cloud infrastructure is scripted using a declarative approach, to provide a full IaC (Infrastructure as Code) experience using Terraform. The Pc service stores data in a PostgreSQL database and communicates with the Nts service using a queue on an event bus. Sensitive data such as connection strings are stored in a secure place on Azure and not checked into the code repository.

Before starting: the following sections don't explain each step in detail (such as creating solutions and projects) and are aimed at developers who are familiar with Visual Studio or similar. However, the GitHub repository link is at the end of this post.

To get started with the SSHS development, first create the repository and define the folder structure. The SSHS repository is defined as:

Just focus on a few things for now:

NOTE: Disable the nullable flag in the csproj file; it is usually enabled by default in .NET 6 project templates.

The ProductCatalog service needs to provide APIs to manage products, and to better support this scenario we use Swagger (OpenAPI) to give some documentation to consumers and make the development experience easier.

Then there are dependencies: database and event bus. To get access to the database, it is going to use Entity Framework.

Finally, a secure storage service, Azure Key Vault, is required to safely store connection strings.

The new ASP.NET Core 6 application Visual Studio templates don't provide a Startup class anymore; instead, everything is in the Program class. However, as discussed in the ProductCatalog deployment paragraph, there is a bug related to that approach, so let's create a Startup class:
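
A minimal sketch of such a Startup class (the ProductCatalog namespace is an assumption; the registrations shown in the following sections end up in these two methods):

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace ProductCatalog
{
    public class Startup
    {
        public Startup(IConfiguration configuration) => Configuration = configuration;

        public IConfiguration Configuration { get; }

        // Registers application services in the IoC container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllers();
        }

        // Configures the HTTP request pipeline.
        public void Configure(IApplicationBuilder app)
        {
            app.UseRouting();
            app.UseEndpoints(endpoints => endpoints.MapControllers());
        }
    }
}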

Then replace the Program.cs content with the following code:
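
One way to wire the Startup class back in is the classic generic-host bootstrap; a sketch, assuming the same ProductCatalog namespace:

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

namespace ProductCatalog
{
    public class Program
    {
        public static void Main(string[] args) =>
            CreateHostBuilder(args).Build().Run();

        // Uses the Startup class defined above instead of the minimal hosting model.
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
    }
}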

The next step is about writing some simple CRUD APIs to manage the products. Here's the controller definition:
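
A sketch of what that controller could look like, delegating all work to an IProductService abstraction and the DTO classes introduced just below (route, namespace, and member names are assumptions):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using ProductCatalog.Dtos;
using ProductCatalog.Services;

namespace ProductCatalog.Controllers
{
    [ApiController]
    [Route("api/products")]
    public class ProductsController : ControllerBase
    {
        private readonly IProductService _productService;

        public ProductsController(IProductService productService) => _productService = productService;

        [HttpGet]
        public Task<IEnumerable<ProductDto>> GetAll() => _productService.GetAllAsync();

        [HttpGet("{id:guid}")]
        public Task<ProductDto> GetById(Guid id) => _productService.GetByIdAsync(id);

        [HttpPost]
        public Task<ProductDto> Create(CreateProductDto product) => _productService.CreateAsync(product);

        [HttpPut("{id:guid}")]
        public Task Update(Guid id, UpdateProductDto product) => _productService.UpdateAsync(id, product);

        [HttpDelete("{id:guid}")]
        public Task Delete(Guid id) => _productService.DeleteAsync(id);
    }
}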

The ProductService definition is:
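
Here the service is sketched as a contract only; a persistence-backed implementation follows in the database section. The member names mirror the controller above and are assumptions:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using ProductCatalog.Dtos;

namespace ProductCatalog.Services
{
    public interface IProductService
    {
        Task<IEnumerable<ProductDto>> GetAllAsync();
        Task<ProductDto> GetByIdAsync(Guid id);
        Task<ProductDto> CreateAsync(CreateProductDto product);
        Task UpdateAsync(Guid id, UpdateProductDto product);
        Task DeleteAsync(Guid id);
    }
}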

And finally, define the (very simple) DTO classes:
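
A plausible shape for those DTOs; apart from the Owner property described below, every field is an assumption:

using System;

namespace ProductCatalog.Dtos
{
    public class ProductDto
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        public string Owner { get; set; } // email address notified when the product is added
    }

    public class CreateProductDto
    {
        public string Name { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        public string Owner { get; set; }
    }

    public class UpdateProductDto
    {
        public string Name { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
    }
}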

The Owner property should contain the email address to notify when a product is added to the system. I haven't added any kind of validation, since that is a huge topic not covered in this post.

Then, register the ProductService in the IoC container using services.AddScoped<IProductService, ProductService>(); in the Startup class.

Often cloud-native applications use OpenAPI to make API testing and documentation easier. The official definition is:

The OpenAPI Specification (OAS) defines a standard, language-agnostic interface to RESTful APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection. When properly defined, a consumer can understand and interact with the remote service with a minimal amount of implementation logic.

Long story short: OpenAPI gives you a nice UI to quickly consume APIs and read their documentation, perfect for development and testing environments, NOT for production. However, since this is a demo app, I kept it enabled in all environments. In order to include an attention flag, I put some commented-out code to exclude it from the Production environment.

To add OpenAPI support, install the Swashbuckle.AspNetCore NuGet package in the Pc project and update the Startup class:
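
A sketch of the Swagger registration, assuming the SwaggerApiInfo configuration section mentioned in the note below and the Startup class shown earlier:

// In Startup.ConfigureServices
services.AddSwaggerGen(options =>
{
    var apiInfo = Configuration.GetSection("SwaggerApiInfo");
    options.SwaggerDoc("v1", new Microsoft.OpenApi.Models.OpenApiInfo
    {
        Title = apiInfo["Title"],
        Version = "v1",
        Contact = new Microsoft.OpenApi.Models.OpenApiContact { Name = apiInfo["Name"] }
    });

    // Feed the XML documentation file (enabled in the csproj) to Swagger.
    var xmlFile = $"{System.Reflection.Assembly.GetExecutingAssembly().GetName().Name}.xml";
    options.IncludeXmlComments(System.IO.Path.Combine(System.AppContext.BaseDirectory, xmlFile));
});

// In Startup.Configure
// if (!env.IsProduction())  // uncomment to keep Swagger out of Production
// {
app.UseSwagger();
app.UseSwaggerUI();
// }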

Enable the XML documentation file generation in the csproj. These documentation files are read by Swagger and shown in the UI:

NOTE: Add to the appsettings.json file a section named SwaggerApiInfo with two inner properties, Name and Title, with values of your choice.

Add some documentation to the APIs, just like in the following example:
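
For instance, an action documented along these lines (the wording and attributes are illustrative; StatusCodes comes from Microsoft.AspNetCore.Http):

/// <summary>
/// Returns the product with the given identifier.
/// </summary>
/// <param name="id">The product identifier.</param>
/// <response code="200">The product was found and is returned.</response>
/// <response code="404">No product exists with the given identifier.</response>
[HttpGet("{id:guid}")]
[ProducesResponseType(typeof(ProductDto), StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
public Task<ProductDto> GetById(Guid id) => _productService.GetByIdAsync(id);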

Now, run the application and navigate to localhost:<port>/index.html. Here, you can see how the Swagger UI shows all the details specified in the C# code documentation: API descriptions, schemas of accepted types, status codes, supported media types, a sample request, and so on. This is extremely useful when working in a team.

Even though this is just an example, it is good practice to add GZip compression to API responses in order to improve performance. Open the Startup class and add the following lines:
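
A sketch of that registration, using the response-compression middleware that ships with ASP.NET Core (Microsoft.AspNetCore.ResponseCompression namespace):

// In Startup.ConfigureServices
services.AddResponseCompression(options =>
{
    options.EnableForHttps = true;
    options.Providers.Add<Microsoft.AspNetCore.ResponseCompression.GzipCompressionProvider>();
});
services.Configure<Microsoft.AspNetCore.ResponseCompression.GzipCompressionProviderOptions>(options =>
    options.Level = System.IO.Compression.CompressionLevel.Fastest);

// In Startup.Configure, early in the pipeline
app.UseResponseCompression();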

To handle errors, custom exceptions and a custom middleware are used:
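
A minimal sketch of that approach: a hypothetical domain exception plus a middleware that turns exceptions into JSON error responses (names and payload shape are assumptions):

using System;
using System.Net;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

namespace ProductCatalog.Middlewares
{
    // Hypothetical domain exception thrown when a product cannot be found.
    public class ProductNotFoundException : Exception
    {
        public ProductNotFoundException(Guid id) : base($"Product '{id}' was not found.") { }
    }

    public class ExceptionHandlingMiddleware
    {
        private readonly RequestDelegate _next;

        public ExceptionHandlingMiddleware(RequestDelegate next) => _next = next;

        public async Task InvokeAsync(HttpContext context)
        {
            try
            {
                await _next(context);
            }
            catch (ProductNotFoundException ex)
            {
                await WriteErrorAsync(context, HttpStatusCode.NotFound, ex.Message);
            }
            catch (Exception)
            {
                await WriteErrorAsync(context, HttpStatusCode.InternalServerError, "An unexpected error occurred.");
            }
        }

        private static Task WriteErrorAsync(HttpContext context, HttpStatusCode statusCode, string detail)
        {
            context.Response.StatusCode = (int)statusCode;
            context.Response.ContentType = "application/json";
            return context.Response.WriteAsync(JsonSerializer.Serialize(new { status = (int)statusCode, detail }));
        }
    }
}

The middleware would then be registered near the top of the pipeline with app.UseMiddleware<ExceptionHandlingMiddleware>(); in Startup.Configure.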

The Pc application needs to persist data (the products) in storage. Since the Product entity has a specific schema, a SQL database suits this scenario. In particular, PostgreSQL is an open-source transactional database offered as a PaaS service on Azure.

Entity Framework is an ORM, a tool that makes the object translation between SQL and the OOP language easier. Even if SSHS performs very simple queries, the goal is to simulate a real scenario where ORMs (and eventually micro-ORMs, such as Dapper) are heavily used.

Before starting, run a local PostgreSQL instance for the development environment. My advice is to use Docker, especially for Windows users. Install Docker if you don't have it yet, and run docker run -p 127.0.0.1:5432:5432/tcp --name postgres -e POSTGRES_DB=product_catalog -e POSTGRES_USER=sqladmin -e POSTGRES_PASSWORD=Password1! -d postgres.

For more information, you can refer to the official documentation.

Once the local database is running properly, it is time to get started with Entity Framework for PostgreSQL. Let's install these NuGet packages:

Define the entities, i.e. the Product class:
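
A plausible Product entity; apart from Owner, the columns are assumptions:

using System;

namespace ProductCatalog.Entities
{
    public class Product
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        public string Owner { get; set; }
        public DateTime CreatedOn { get; set; }
    }
}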

Create a DbContext class, which will be the gateway to access the database, and define the mapping rules between the SQL objects and the CLR objects:
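
A sketch of that DbContext (class and namespace names are assumptions; the behavior described in the next paragraph is what matters):

using System.Reflection;
using Microsoft.EntityFrameworkCore;
using ProductCatalog.Entities;

namespace ProductCatalog.Data
{
    public class ProductCatalogDbContext : DbContext
    {
        public ProductCatalogDbContext(DbContextOptions<ProductCatalogDbContext> options)
            : base(options) { }

        // Exposes the products table as if it were an in-memory collection.
        public DbSet<Product> Products { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // Applies every IEntityTypeConfiguration<T> found in the running assembly.
            modelBuilder.ApplyConfigurationsFromAssembly(Assembly.GetExecutingAssembly());
        }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            // Lazy loading via dynamic proxies (Microsoft.EntityFrameworkCore.Proxies).
            optionsBuilder.UseLazyLoadingProxies();
        }
    }
}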

The DbSet property represents, as an in-memory collection, the data persisted in storage; the OnModelCreating method override scans the running assembly looking for all the classes that implement the IEntityTypeConfiguration interface, to apply custom mappings. The OnConfiguring override, instead, enables the Entity Framework proxy to lazily load relationships between tables. This isn't strictly needed here, since we have a single table, but it is a nice tip to improve performance in a real scenario. The feature is provided by the NuGet package Microsoft.EntityFrameworkCore.Proxies.

Finally, the ProductEntityConfiguration class defines some mapping rules:
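
For example (table and column details are assumptions):

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;
using ProductCatalog.Entities;

namespace ProductCatalog.Data
{
    public class ProductEntityConfiguration : IEntityTypeConfiguration<Product>
    {
        public void Configure(EntityTypeBuilder<Product> builder)
        {
            builder.ToTable("products");

            builder.HasKey(p => p.Id);
            // The Guid key is generated when the row is inserted.
            builder.Property(p => p.Id).ValueGeneratedOnAdd();

            builder.Property(p => p.Name).IsRequired().HasMaxLength(256);
            builder.Property(p => p.Owner).IsRequired();
            builder.Property(p => p.Price).HasColumnType("numeric(10,2)");
        }
    }
}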

It is important to remember that the Guid is generated after the creation of the SQL object. If you need to generate the Guid before the SQL object, you can use HiLo (more info here).

Finally, update the Startup class with the latest changes:
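
That essentially means registering the DbContext against PostgreSQL; a sketch, assuming the Npgsql.EntityFrameworkCore.PostgreSQL provider and the connection string name used later in the Docker run command:

// In Startup.ConfigureServices
services.AddDbContext<ProductCatalogDbContext>(options =>
    options.UseNpgsql(Configuration.GetConnectionString("ProductCatalogDbPgSqlConnection")));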

The database connection string is sensitive information, so it shouldn't be stored in the appsettings.json file. For debugging purposes, user secrets can be used: a feature provided by the .NET SDK to store sensitive information that shouldn't be checked into the code repository. If you are using Visual Studio, right-click on the project and select Manage user secrets; if you are using any other editor, open the terminal, navigate to the csproj file location, and type dotnet user-secrets init. The csproj file now contains a UserSecretsId node with a Guid to identify the project secrets.

There are three different ways to set a secret now:

The secrets.json file should look as follows:

Let's get on with the ProductService implementation:
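
A sketch of an Entity Framework-backed implementation of the IProductService contract shown earlier (mapping and exception details are assumptions; in the full sample, creating a product would also publish the event that triggers the Nts notification email):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using ProductCatalog.Data;
using ProductCatalog.Dtos;
using ProductCatalog.Entities;
using ProductCatalog.Middlewares;

namespace ProductCatalog.Services
{
    public class ProductService : IProductService
    {
        private readonly ProductCatalogDbContext _dbContext;

        public ProductService(ProductCatalogDbContext dbContext) => _dbContext = dbContext;

        public async Task<IEnumerable<ProductDto>> GetAllAsync()
        {
            var products = await _dbContext.Products.ToListAsync();
            return products.Select(ToDto);
        }

        public async Task<ProductDto> GetByIdAsync(Guid id)
        {
            var product = await _dbContext.Products.FindAsync(id)
                ?? throw new ProductNotFoundException(id);
            return ToDto(product);
        }

        public async Task<ProductDto> CreateAsync(CreateProductDto dto)
        {
            var product = new Product
            {
                Name = dto.Name,
                Description = dto.Description,
                Price = dto.Price,
                Owner = dto.Owner,
                CreatedOn = DateTime.UtcNow
            };
            _dbContext.Products.Add(product);
            await _dbContext.SaveChangesAsync();
            // Here the full sample would also publish a "product added" event to the queue.
            return ToDto(product);
        }

        public async Task UpdateAsync(Guid id, UpdateProductDto dto)
        {
            var product = await _dbContext.Products.FindAsync(id)
                ?? throw new ProductNotFoundException(id);
            product.Name = dto.Name;
            product.Description = dto.Description;
            product.Price = dto.Price;
            await _dbContext.SaveChangesAsync();
        }

        public async Task DeleteAsync(Guid id)
        {
            var product = await _dbContext.Products.FindAsync(id)
                ?? throw new ProductNotFoundException(id);
            _dbContext.Products.Remove(product);
            await _dbContext.SaveChangesAsync();
        }

        private static ProductDto ToDto(Product p) => new ProductDto
        {
            Id = p.Id,
            Name = p.Name,
            Description = p.Description,
            Price = p.Price,
            Owner = p.Owner
        };
    }
}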

The next step is about creating the database schema through migrations. The migrations feature incrementally updates the database schema to keep it in sync with the application's data model while preserving existing data. The details of the migrations applied to the database are stored in a table called "__EFMigrationsHistory". This information is then used to execute only the not-yet-applied migrations against the database specified in the connection string.

To define the first migration, open the CLI in the csproj folder and run

dotnet-ef migrations add "InitialMigration" (it is stored in the Migrations folder). Then update the database with the migration just created: dotnet-ef database update.

NOTE: If this is the first time you are going to run migrations, install the CLI tool first using dotnet tool install --global dotnet-ef.

As I've said, user secrets only work in the Development environment, so Azure Key Vault support must be added. Install the package Azure.Identity and edit Program.cs:
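
A sketch of that change on top of the generic-host Program.cs shown earlier; it assumes a KeyVaultName configuration value and the Azure.Extensions.AspNetCore.Configuration.Secrets package, which (together with Azure.Identity) provides the AddAzureKeyVault overload used here:

using System;
using Azure.Identity;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

namespace ProductCatalog
{
    public class Program
    {
        public static void Main(string[] args) =>
            CreateHostBuilder(args).Build().Run();

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration((context, config) =>
                {
                    // In Development, user secrets are used; elsewhere, pull secrets from Key Vault.
                    if (!context.HostingEnvironment.IsDevelopment())
                    {
                        var builtConfig = config.Build();
                        var keyVaultName = builtConfig["KeyVaultName"]; // assumed configuration key
                        config.AddAzureKeyVault(
                            new Uri($"https://{keyVaultName}.vault.azure.net/"),
                            new DefaultAzureCredential());
                    }
                })
                .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
    }
}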

where the Key Vault name is the one that will be declared in the Terraform scripts later.

The ASP.NET Core SDK offers libraries for reporting application health through REST endpoints. Install the Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore NuGet package and configure the endpoints in the Startup class:
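
One possible arrangement, assuming the Startup and DbContext classes sketched earlier (HealthCheckOptions lives in Microsoft.AspNetCore.Diagnostics.HealthChecks):

// In Startup.ConfigureServices
services.AddHealthChecks()
    .AddDbContextCheck<ProductCatalogDbContext>("dbcontext");

// In Startup.Configure, inside UseEndpoints
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();

    // Liveness-style ping: reports the application status without hitting dependencies.
    endpoints.MapHealthChecks("/health/ping", new HealthCheckOptions { Predicate = _ => false });

    // Reports whether the Entity Framework DbContext can reach the database.
    endpoints.MapHealthChecks("/health/dbcontext", new HealthCheckOptions
    {
        Predicate = registration => registration.Name == "dbcontext"
    });
});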

The code above adds two endpoints: at the /health/ping endpoint, the application responds with the health status of the system. Default values are Healthy, Unhealthy, or Degraded, but they can be customized. The /health/dbcontext endpoint, instead, gives back the current Entity Framework DbContext status; basically, whether the app can communicate with the database. Note that the NuGet package mentioned above is the one specific to Entity Framework, which internally refers to Microsoft.Extensions.Diagnostics.HealthChecks. If you don't use EF, you can use the latter only.

You can get more info in the official documentation.

The last step to complete the Pc project is to add a Dockerfile. Since Pc and Nts are independent projects, it is important to have a single Dockerfile per project. Create a Docker folder in the ProductCatalog project, then define a .dockerignore file and the Dockerfile:

NOTE: Don't forget to add a .dockerignore file as well. On the internet there are plenty of examples based on specific technologies (.NET Core in this case).

NOTE: If your Docker build gets stuck on the dotnet restore command, you have hit a bug documented here. To fix it, add this node to the csproj:

and add /p:IsDockerBuild=true to both the restore and publish commands in the Dockerfile, as explained in this comment.

To try this Dockerfile locally, navigate with your CLI to the project folder and run:

docker build -t productcatalog -f Docker/Dockerfile ., where:

Then run the image using:

docker run --name productcatalogapp -p 8080:80 -e ConnectionStrings:ProductCatalogDbPgSqlConnection="Host=localhost;Port=5432;Username=sqladmin;Password=Password1!;Database=product_catalog;Include Error Detail=true" -it productcatalog.

NOTE: The docker run command starts your app, but it won't work correctly unless you create a Docker network between the ProductCatalog and the PostgreSQL containers. However, you can try to load the Swagger web page to see if the app has at least started. More info here.

Go to http://localhost:8080/index.html, and if everything is working locally, move forward to the next step: the infrastructure definition.

Now that the code is written and properly running in a local environment, it can be deployed in a cloud environment. As mentioned earlier, the public cloud we are going to use is Microsoft Azure.

Azure App Service is a PaaS service able to run Docker containers, which suits this scenario best. Azure Container Registry holds the Docker images, ready to be pulled by the App Service. Then, an Azure Key Vault instance can store application secrets such as connection strings.

Other important resources are the database server (Azure Database for PostgreSQL) and the Service Bus that allows asynchronous communication between the services.

To deploy Azure resources, no operation needs to be executed manually. Everything is written, and versioned, as a Terraform script, using declarative configuration files. The language used is HashiCorp Configuration Language (HCL), a cloud-agnostic language that allows you to work with different cloud providers using the same tool. No Azure Portal, CLI, ARM, or Bicep files are used.

Before working with Terraform, just a couple of notes:

Terraform needs to store the state of the deployed resources in order to understand whether a resource has been added, changed, or deleted. This state is saved in a file stored in the cloud (a storage account for Azure, S3 for AWS, etc.). Since this is part of the Terraform configuration, it is not possible to do it through the script itself; instead, it is the only operation that must be done using other tools. The next sections explain how to set up the environment using the az CLI to create the storage account and the IAM identity that actually runs the code.

NOTE: You cannot use the same names I used, because some of the resources require a name that is unique across the entire Azure cloud.

Create a resource group for the Terraform state

Every Azure resource must be in a resource group, and it is good practice to have a different resource group for each application/environment. In this case, I created a resource group to hold the Terraform state and another one to host all the production resources.

To create the resource group, open the CLI (Cloud Shell, PowerShell, it is up to you) and type:

az group create --location <location> --name sshsstates

Create a storage account for the Terraform state

A storage account is a resource that holds Azure Storage objects such as blobs, file shares, queues, tables, and so on. In this case, it will hold a blob container with the Terraform state file.

Create one by running:

where location is the location of the resource group created at the previous step.

Then, create the blob container in this storage account:

where sshsstg01 is the storage account name created at the previous step.

See more here:

Developing a Cloud-Native Application on Microsoft Azure Using Open Source Technologies - InfoQ.com

Timecho, Founded by the Creators of Apache IoTDB, Raises Over US$10 Million First Fund for Open-Source Software R&D and Time-Series Database Solutions…

STUTTGART, Germany and BEIJING, June 28, 2022 /PRNewswire/ -- Timecho, founded by the creators of Apache IoTDB, an IoT-native time series database, has announced a first round of funding of more than US$10 million led by Sequoia China and joined by Koalafund, Gobi China, and Cloudwise.

Time series databases have thrived in recent years to meet new data management requirements of the Internet of Things (IoT) era. Developers are seeking a database that can optimize data ingestion rates and storage compression, as well as query and analytics features that match the needs of their time-stamped data applications. Based on Apache IoTDB, a top-level open-source project ranked seventh by commits (individual changes to files) in 2021 among all Apache projects, Timecho is technically capable of delivering foundational infrastructure that powers demanding industrial use cases. Moreover, the Timecho team has over 40 patents and papers in industry-leading time-series management journals including ICDE, SIGMOD, and VLDB.

"Apart from its outstanding performance, Apache IoTDB is lightweight, easy-to-use, and deeply integrated with software in the big data ecosystem, including Apache PLC4X, Kafka, Spark, and Grafana," said Dr. Xiangdong Huang, Founder of Timecho and PMC Chair of Apache IoTDB. "It has been widely used by multiple enterprises in the power industry, public transportation, intelligent factories and other scenarios. To expand its application in the IoT area and strengthen its performance to meet the specific requirements of enterprises, Timecho will continue investing aggressively in R&D to enhance features such as end-edge-cloud collaboration."

"Timecho maintains intensive interaction with the open-source community," said Dr. Julian Feinauer, Timecho Technical Director for the European Market and PMC member of Apache IoTDB. "With the funding, we will be able to accelerate R&D to help developers and companies best leverage the power of Apache IoTDB."

Timecho will focus on providing solutions that are more adaptable to high-end equipment, mass devices, computational-resource-limited platforms, and further scenarios.

About Timecho

Timecho combines "time" with "echo", meaning to make time-series data more valuable. Founded by the creators of Apache IoTDB, Timecho delivers IoT-native time-series solutions to satisfy the demands of mass data storage, fast reads, and complex data analytics, enabling companies to leverage the value of time-series data with higher reliability at lower cost.

For more information, visit www.timecho.com or its LinkedIn page.

Contact: Timecho Limited

SOURCE Timecho Limited

Here is the original post:

Timecho, Founded by the Creators of Apache IoTDB, Raises Over US$10 Million First Fund for Open-Source Software R&D and Time-Series Database Solutions...

OpenReplay raises $4.7M for its open source tool to find the bugs in sites – TechCrunch

When users on websites and apps run into problems, developers have various tools to record what they were doing in order to find out what went wrong. But these can involve cumbersome methods like screenshots and back-and-forth emails with customers.

ContentSquare and Medallia are products that primarily target marketers and product managers rather than developers, who need to know where apps are going wrong. Meanwhile, developers are using open source solutions, but these have their drawbacks.

OpenReplay is similar to nginx in the sense that the software is available for free to developers and is self-hosted; this means data can't leave a company's infrastructure, while extra services are paid for.

It has now raised $4.7 million in a seed funding round led by Runa Capital with the participation of Expa, 468 Capital, Rheingau Founders and co-founders of Tekion.

OpenReplay provides developers with a session replay stack that helps them troubleshoot issues by making debugging visual, allowing developers to replay everything users do on their web app and potentially understand where, and why, they got stuck, says the company.

Mehdi Osman, CEO and founder of OpenReplay, said in a statement: "While other session replay tools are targeting marketers and product managers, we focused on those who actually build the product: developers. Enabling developers to self-host it on their premises, without involving any third party to handle their user data, is a game changer."

OpenReplay will use the new funding to grow its community, accelerate deployment and improve user experience.

Konstantin Vinogradov, principal at Runa Capital, based in Palo Alto, California, which also invested in nginx, added: "We actively invest in companies building open-source projects, especially when the open-source model enables better products. OpenReplay is a great example of such an approach."

See the original post:

OpenReplay raises $4.7M for its open source tool to find the bugs in sites - TechCrunch