How Development Teams Can Orchestrate Their Workflow with Pipelines as Code – InfoQ.com

Key Takeaways

Looking at the state of software engineering, it's clear that the industry has undergone a level of transformation akin to a chameleon's. What used to be mainstream is now almost extinct, replaced by completely different tools and technologies.

If I look at what I used to do ten years ago, I remember working heavily with centralised version control systems, being bound by the choice of operating system a workload was running upon, and in general a strong sense of demarcation between being a developer and working in infrastructure.

Things have obviously changed, however the single biggest disruptor in this field remains Git. Git changed everything: it democratised good engineering practices like source control, and it allowed a wealth of tools to be built upon its foundation. DevOps obviously played a major part in this, being the glue holding together a number of new tools, approaches and technologies. In short, this bottom-up proliferation and the broad adoption of DevOps practices led to the industry organically moving to an as-code approach.

That's how Terraform (and similar tools) emerged, pushed by the tools ecosystem and by DevOps becoming broadly adopted and mainstream for most companies. Infrastructure as Code is now ubiquitous, and every cloud provider offers infrastructure deployment capabilities via code files and APIs, which should be the default choice for any application that is not a Hello World sample.

Infrastructure as Code was just the beginning. Configuration as Code followed shortly after, again becoming extremely commonplace and enabling organisations to scale their engineering capacity many times over. And in order to continuously increase the value development teams generate, Pipelines as Code was the natural consequence.

Pipelines as Code is the natural evolution of a key artefact engineering teams use every day. Think about it: you have Infrastructure as Code and Configuration as Code, so why not Pipelines as Code?

The concept is simple: rather than thinking about a pipeline just in terms of a CI/CD engine, you can expand it into an orchestrator for your development platform, with all its artefacts stored in code files.

That will provide you with versioning, team working capabilities, and so on, while at the same time giving you the power to automate all of your processes. And the more you automate, the more your quality increases, your speed improves, and your resiliency goes up. It's a game changer for any development team.

Look at my blog publishing system: it's all hosted on GitHub, and whenever I post something this is what happens:

Two pipelines (or workflows, in GitHub's jargon) will run every time, one for publishing and one for other activities, under certain conditions. You might wonder why two, and why the CI workflow exists alongside the pages-build-deployment workflow. The first one is custom, the second one is out of the box for publishing. Let's take a look at the custom one:

This workflow automatically tweets on my behalf. It will run every time, unless a commit contains NO CI in the message. It's a code file, and it is storeded in my repository. Should I ever move it from my account to another repository, it will keep working without issues.
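A GitHub Actions workflow along these lines can be sketched as follows. This is an illustrative reconstruction, not the actual file: the `tweet.sh` helper script, the step names, and the trigger branch are all assumptions.

```yaml
name: CI
on:
  push:
    branches: [main]

jobs:
  tweet:
    # Skip the whole job when the commit message contains "NO CI"
    if: ${{ !contains(github.event.head_commit.message, 'NO CI') }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical step: announce the new post on Twitter
      - name: Tweet the new post
        run: ./scripts/tweet.sh
```

Because the workflow lives under `.github/workflows/` inside the repository, moving the repository moves the automation with it.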

All CI/CD orchestrators are going in this direction: Azure Pipelines, GitHub Actions, Jenkins, etc. The UI is no longer a focus, and Pipelines as Code allows for some very specific advantages for a developer.

Being just code means that your pipelines will benefit from all the tools already used in any engineering organisation. This includes version control, branching, pull requests, etc. Developers know how to deal with code, so pipelines become just another artefact stored in Git.

This also facilitates a number of situations where you must maintain traceability, auditability, and so on, while keeping ease of access and familiarity. Pipelines as Code means that everything is stored with full history and access controls, while remaining easy to use and repeatable.

Finally, portability. Yes, there will be different dialects of Pipelines as Code depending on the target platform; however, the concept remains the same across the board. Take GitHub Actions and Azure Pipelines, for example: both are based on YAML, with different syntax and some peculiarities. It takes a day at most for a developer to get up to speed, and a week tops to be comfortable with the differences. The productivity boost is remarkable, given there is no more distinction between a build pipeline and a release pipeline. Everything is just a set of orchestrated tasks performed by the same engine.

There are some advanced features in each modern orchestrator. Templates are really common now, and a true life-saver. You'll define a pipeline once, and you can re-use it across multiple automations and projects with minimal changes. Your template will contain all the logic and the possible configurations, which you will invoke from your main pipeline. Let's take a look.

This would be a template, named template.yml in your repository:
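A minimal sketch of what such a template can look like in Azure Pipelines YAML (the parameter name `items` is illustrative):

```yaml
# template.yml - reusable step template
parameters:
- name: items       # the input array
  type: object
  default: []

steps:
# Expand one command-line task per item in the array
- ${{ each item in parameters.items }}:
  - script: echo ${{ item }}
    displayName: Print ${{ item }}
```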

This template accepts an input array, and it relays the individual items making up the array one by one using a command-line task. The logic is very simple, yet it shows that within a pipeline you can already use complex constructs like for loops (via the each keyword) to dynamically compose as many tasks as there are items in the input array.

Now, if you invoke it from another pipeline, all you have to do is this:
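A caller pipeline along these lines would do the job (the trigger, pool, and item values are illustrative):

```yaml
# azure-pipelines.yml - the main pipeline consuming the template
trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- template: template.yml
  parameters:
    items:
    - apples
    - oranges
    - pears
    - bananas
```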

The output of this main pipeline is as follows:

Four command-line tasks generated on-demand, printing out the values. All orchestrated on the fly.

Another feature I really like is the Matrix in Azure Pipelines, for example:
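A matrix fans a single job definition out across several agents. A sketch of such a definition (the job and variable names are illustrative):

```yaml
strategy:
  matrix:
    linux:
      imageName: ubuntu-latest
    mac:
      imageName: macOS-latest
    windows:
      imageName: windows-latest

pool:
  vmImage: $(imageName)

steps:
- script: echo Hello from $(Agent.OS)
  displayName: Say hello
```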

This snippet will run the tasks specified in the steps section across three different pipelines, each running on a different agent with a different operating system. This is all it takes.

Needless to say, it's not all plain sailing and straightforward: there is a learning curve. Unsurprisingly, the biggest hurdle to get past is the lack of a UI. For at least a decade our build orchestrators relied on UIs to make the process simpler and easier to digest, as developers lacked full control over their process. As an industry we settled on the expectation that a UI had to be there.

Then the as-code movement came along and started breaking through. Infrastructure as Code was the first foray, then everything else followed. Fast forward ten years, and we can now deal with the fact that UIs no longer have the most features and options, instead becoming just a gateway to a build orchestrator: a place to learn the main functionalities before moving to the as-code implementation.

The other important factor of change is that now everything runs in a pipeline, with potentially no distinction between build and release. It's up to the developer to define these boundaries, and migrating can require some work as there will be no 1:1 mapping for everything. It is however a fairly lightweight job, so not that big of an obstacle.

After working with many platforms you will realise there are patterns and reusable approaches; however, the main lesson learned is still about getting into the habit of implementing Pipelines as Code as early as possible. Creating the build definition should be the first thing an engineer does, because it will evolve with the application code and it will provide a seamless experience once used with the DevOps platform.

A typical example is this: having pipeline definitions embedded in your code repositories means that your repositories immediately become fully granular and independent, as they contain not just the source code for the application, but also the build definition required to compile and deploy that application, making each repository a movable artefact across a DevOps platform. Microservices development becomes way easier. Testing gets simpler. Automating mundane tasks can yield so much additional value to the team, given any engineer can focus on solving actual problems rather than repeating the same steps all the time. Pipelines as Code does wonders.

Moving to Pipelines as Code doesn't happen overnight, but it can open so many doors and paths for your development team. If you are just getting started, do one thing: pick any of your build and release processes and start replicating it in your code files. It's as simple as that. The more you automate these processes, the more you will start implementing them as the default option, and you will save a huge amount of time which is otherwise wasted on repetitive tasks.

Doing so will naturally guide you towards automating the steps currently holding you back, all with the benefit of the development experience engineers are used to. Changes become easier to track, merging is simple and coupled with a peer review process it will be accessible to every developer.


Now Comes The Hard Part, AMD: Software – The Next Platform

From the moment the first rumors surfaced that AMD was thinking about acquiring FPGA maker Xilinx, we thought this deal was as much about software as it was about hardware.

We like that strange quantum state between hardware and software in which the programmable gates in FPGAs exist, but that was not as important. Access to a whole set of new embedded customers was pretty important, too. But the Xilinx deal was really about the software, and the skills that Xilinx has built up over the decades crafting very precise dataflows and algorithms to solve problems where latency and locality matter.

After the Financial Analyst Day presentations last month, we have been mulling the one by Victor Peng, formerly chief executive officer at Xilinx and now president of the Adaptive and Embedded Computing Group at AMD.

This group mixes together embedded CPUs and GPUs from AMD with the Xilinx FPGAs and has over 6,000 customers. It brought in a combined $3.2 billion in 2021 and is on track to grow by 22 percent or so this year to reach $3.9 billion or so. Importantly, Xilinx had a total addressable market of about $33 billion for 2025, but with the combination of AMD and Xilinx, the TAM has expanded to $105 billion for AECG. Of that, $13 billion is from the datacenter market that Xilinx has been trying to cater to, $33 billion is from embedded systems of various kinds (factories, weapons, and such), $27 billion is from the automotive sector (Lidar, Radar, cameras, automated parking, the list goes on and on), and $32 billion is from the communications sector (with 5G base stations being the important workload). This is roughly a third of the $304 billion TAM for 2025 of the new and improved AMD, by the way. (You can see how this TAM has exploded in the past five years here. It's remarkable, and hence we remarked upon it in great detail.)

But a TAM is not a revenue stream, just a giant glacier off in the distance that can be melted with brilliance to make one.

Central to the strategy is AMD's pursuit of what Peng called pervasive AI, and that means using a mix of CPUs, GPUs, and FPGAs to address this exploding market. What it also means is leveraging the work that AMD has done designing exascale systems in conjunction with Hewlett Packard Enterprise and some of the major HPC centers of the world to continue to flesh out an HPC stack. AMD will need both if it hopes to compete with Nvidia and to keep Intel at bay. CUDA is a formidable platform, and oneAPI could be if Intel keeps at it.

"When I was with Xilinx, I never said that adaptive computing was the end all, be all of computing," Peng explained in his keynote address. "A CPU is going to always be driving a lot of the workloads, as will GPUs. But I've always said that in a world of change, adaptability is really an incredibly valuable attribute. Change is happening everywhere; you hear about it. The architecture of a datacenter is changing. The platform of cars is totally changing. Industrial is changing. There is change everywhere. And if hardware is adaptable, then that means not only can you change it after it's been manufactured, but you can change it even when it's deployed in the field."

Well, the same can be said of software, which follows hardware of course, even though Peng didn't say that. People were messing around with Smalltalk back in the late 1980s and early 1990s, after it had been maturing for two decades, because of the object-oriented nature of the programming, but the market chose what we would argue was an inferior Java only a few years later because of its absolute portability thanks to the Java Virtual Machine. Companies not only want to have the options of lots of different hardware, tuned specifically for situations and workloads, but they want the ability to have code be portable across those scenarios.

This is why Nvidia needs a CPU that can run CUDA (we know how weird that sounds), and why Intel is creating oneAPI and anointing Data Parallel C++ with SYCL as its Esperanto across CPUs, GPUs, FPGAs, NNPs, and whatever else it comes up with.

This is also why AMD needed Xilinx. AMD has plenty of engineers, well north of 16,000 of them now, and many of them are writing software. But as Jensen Huang, co-founder and chief executive officer of Nvidia, explained to us last November, three quarters of Nvidia's 22,500 employees are writing software. And it shows in the breadth and depth of the development tools, algorithms, frameworks, and middleware available for CUDA, and in how that variant of GPU acceleration has become the de facto standard for thousands of applications. If AMD is going to have the algorithmic and industry expertise to port applications to a combined ROCm and Vitis stack, and do it in less time than Nvidia took, it needed to buy that industry expertise.

That is why Xilinx cost AMD $49 billion. And it is also why AMD is going to have to invest much more heavily in software developers than it has in the past, and why the Heterogeneous Interface for Portability, or HIP, API, which is a CUDA-like API that allows for runtimes to target a variety of CPUs as well as Nvidia and AMD GPUs, is such a key component of ROCm. It gets AMD going a lot faster on taking on CUDA applications for its GPU hardware.

But in the long run, AMD needs to have a complete stack of its own covering all of the AI use cases across its many devices:

That stack has been evolving, and Peng will be steering it from here on out with the help of some of those HPC centers that have tapped AMD CPUs and GPUs as their compute engines in pre-exascale and exascale class supercomputers.

Peng didn't talk about HPC simulation and modeling in his presentation at all, and only lightly touched on the idea that AMD would craft an AI training stack atop the ROCm software that was created for HPC. Which makes sense. But he did show how the AI inference stack at AMD would evolve, and with this we can draw some parallels across HPC, AI training, and AI inference.

Here is what the AI inference software stack looks like for CPUs, GPUs, and FPGAs today at AMD:

With the first iteration of its unified AI inference software, which Peng called the Unified AI Stack 1.0, the software teams at AMD and the former Xilinx are going to create a unified inference front end that can span the ML graph compilers on the three different sets of compute engines as well as the popular AI frameworks, and then compile code down to those devices individually.

But in the long run, with the Unified AI Stack 2.0, the ML graph compilers are unified and a common set of libraries span all of these devices; moreover, some of the AI Engine DSP blocks that are hard-coded into Versal FPGAs will be moved to CPUs and the Zen Studio AOCC and Vitis AI Engine compilers will be mashed up to create runtimes for Windows and Linux operating systems for APUs that add AI Engines for inference to Epyc and Ryzen CPUs.

And that, in terms of the software, is the easy part. Having created a unified AI inferencing stack, AMD has to create a unified HPC and AI training stack atop ROCm, which again is not that big of a deal, and then the hard work starts. That is getting the nearly 1,000 key pieces of open source and closed source applications that run on CPUs and GPUs ported so they can run on any combination of hardware that AMD can bring to bear, and probably the hardware of its competitors, too.

This is the only way to beat Nvidia and to keep Intel off balance.


Fortress Information Security Sponsors Open Web Application Security Project To Work on Industry-Wide Software Bill of Materials Standards -…

Orlando, FL, July 6, 2022: Fortress Information Security, the nation's leading cybersecurity provider for critical infrastructure organizations with digitized assets, today joined the Open Web Application Security Project (OWASP) as a silver sponsor. Fortress has allocated a portion of that sponsorship to support the CycloneDX project, which is focused on promoting a lightweight Software Bill of Materials (SBOM) standard for application security and supply chain component analysis.

OWASP is a nonprofit foundation that works to improve software security by making application security risks visible. OWASP activities include community-led open source software projects, more than 250 local chapters worldwide, tens of thousands of members, and industry-leading educational and training conferences.

"OWASP and the CycloneDX project are critical to making universal SBOM principles and standards a reality," said Betsy Jones, chief operating officer of Fortress Information Security. "Bringing software developers and cybersecurity professionals together openly and collaboratively will foster the development of trusted SBOM solutions."

Joined by Tony Turner, Fortress vice president of research and development and an OWASP chapter and project leader for over 10 years, Fortress utilizes multiple OWASP projects such as CycloneDX, SCVS, OWASP Risk Ranking methodology, and many others to secure critical infrastructure.

OWASP is an open community dedicated to enabling organizations to conceive, develop, acquire, operate, and maintain applications that can be trusted. All projects, tools, documents, forums, and chapters are free and open to anyone interested in improving application security.

About Fortress Information Security

Fortress Information Security secures critical industries from cybersecurity and operational threats stemming from vendors, assets, and software in their supply chains. Fortress is the only end-to-end platform that connects intelligence surrounding vendors, information technology and operational technology assets, and software through a holistic, fit-for-purpose approach. Fortress has also partnered with its customers and suppliers to form the Asset-to-Vendor (A2V) network, which facilitates the secure and seamless exchange of asset information and security intelligence, enabling collaborative workflows to better understand and remediate potential issues. Fortress serves critical industries such as energy, government, aerospace & defense, critical manufacturing, industrial automation, automotive, and healthcare.

About OWASP

As the world's largest non-profit organization concerned with software security, OWASP: supports the building of impactful projects; develops & nurtures communities through events and chapter meetings worldwide; and provides educational publications & resources to enable developers to write better software and security professionals to make the world's software more secure.


This Is the Code the FBI Used to Wiretap the World – VICE


The FBI operation in which the agency intercepted messages from thousands of encrypted phones around the world was powered by cobbled-together code. Motherboard has obtained that code and is now publishing sections of it that show how the FBI was able to create its honeypot. The code shows that the messages were secretly duplicated and sent to a ghost contact that was hidden from the users' contact lists. This ghost user, in a way, was the FBI and its law enforcement partners, reading over the shoulder of organized criminals as they talked to each other.

Last year, the FBI and its international partners announced Operation Trojan Shield, in which the FBI secretly ran an encrypted phone company called Anom for years and used it to hoover up tens of millions of messages from Anom users. Anom was marketed to criminals, and ended up in the hands of over 300 criminal syndicates worldwide. The landmark operation has led to more than 1,000 arrests, including alleged top-tier drug traffickers, and massive seizures of weapons, cash, narcotics, and luxury cars.

Motherboard has obtained this underlying code of the Anom app and is now publishing sections of it due to the public interest in understanding how law enforcement agencies are tackling the so-called Going Dark problem, where criminals use encryption to keep their communications out of the hands of the authorities. The code provides greater insight into the hurried nature of its development, the freely available online tools that Anom's developers copied for their own purposes, and how the relevant section of code copied the messages as part of one of the largest law enforcement operations ever.

Do you know anything else about Anom? Were you a user? Did you work for the company? Did you work on the investigation? Are you defending an alleged Anom user? We'd love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, or email joseph.cox@vice.com.

The key part of the Anom app is a section called bot.

The app uses XMPP to communicate, a long-established protocol for sending instant messages. On top of that, Anom wrapped messages in a layer of encryption. XMPP works by having each contact use a handle that in some way looks like an email address. For Anom, these included an XMPP account for the customer support channel that Anom users could contact. Another of these was bot.

Unlike the support channel, bot hid itself from Anom users' contact lists and operated in the background, according to the code and to photos of active Anom devices obtained by Motherboard. In practice the app scrolled through the user's list of contacts, and when it came across the bot account, the app filtered it out and removed it from view.

That finding is corroborated by law enforcement files Motherboard obtained, which say that bot was a hidden or ghost contact that made copies of Anom users' messages.

Authorities have previously floated the idea of using a ghost contact to penetrate encrypted communications. In a November 2018 piece published on Lawfare, Ian Levy and Crispin Robinson, two senior officials from UK intelligence agency GCHQ, wrote that "It's relatively easy for a service provider to silently add a law enforcement participant to a group chat or call," and "You end up with everything still being end-to-end encrypted, but there's an extra end on this particular communication."

The code also shows that in the section that handles sending messages, the app attached location information to any message that is sent to bot. On top of that, the AndroidManifest.xml file in the app, which shows what permissions an app accesses, includes the permission for ACCESS_FINE_LOCATION. This confirms what Motherboard previously reported after reviewing thousands of pages of police files in an Anom-related investigation. Many of the intercepted Anom messages in those documents included the precise GPS location of the device at the time the message was sent.

In some cases, police officers reported that the Anom system failed to record those GPS locations correctly, but that authorities believe the coordinates are generally reliable as they have in some cases been matched with other information such as photos, according to those police files.

A lot of the code for handling communications was apparently copied from an open source messaging app.

The code itself is messy, with large chunks commented out and the app repeatedly logging debug messages to the phone itself.

Cooper Quintin, a senior staff technologist at activist organization the Electronic Frontier Foundation (EFF), didn't think it was unusual for developers to use other modules of code found online. But he did find it bonkers that the FBI used ordinary developers for this law enforcement operation.

"This would be like if Raytheon hired the fireworks company down the street to make missile primers, but didn't tell them they were making missile primers," he said in a phone call. "I would typically assume the FBI would want to keep tighter control on what they're working on," such as working with in-house computer engineers who had security clearance and not bringing in people who are unknowingly taking down criminal organizations, he added. (One reason for the use of third-party developers was that Anom already existed as a company in its own right, with coders hired by the company's creator who worked on an early version of the app, before the FBI became secretly involved in Anom's management.)

Recently courts in Europe and Australia have seen the next step of the Anom operation: the prosecution of these alleged criminals with Anom messages making up much of the evidence against them. Defense lawyers in Australia have started legal requests to obtain the code of the Anom app itself, arguing that access to the code is important to determine that the messages being presented in court by the prosecution are accurate. The Australian Federal Police (AFP) has refused to release the code.

"Anybody who has been charged with an offence arising from messages that are alleged to have been made on the so-called Anom Platform has a clear and obvious interest in understanding how the device worked, how anyone was able to access these messages and, most importantly, whether the original accessing and subsequent dissemination of these messages to Australian authorities was lawful," Jennifer Stefanac, an Australian solicitor who is defending some of the people arrested as part of Operation Ironside, the Australian authorities' side of the Anom operation, told Motherboard in an email.

A second lawyer handling Anom-related cases said they didn't think the Anom code would be of much relevance to defendants' cases. A third said they saw why defendants may seek access to the code, but that they believed it shouldn't be publicly available.

When asked for comment, the San Diego FBI told Motherboard in a statement that "We appreciate the opportunity to provide feedback on potentially publishing portions of the Anom source code. We have significant concerns that releasing the entire source code would result in a number of situations not in the public interest, like the exposure of sources and methods, as well as providing a playbook for others, to include criminal elements, to duplicate the application without the substantial time and resource investment necessary to create such an application. We believe producing snippets of the code could produce similar results."

Motherboard is not publishing the full code of Anom. Motherboard believes the code contains identifying information on who worked on the app. Most of the people who worked on the Anom app were not aware it was secretly an FBI tool for surveilling organized crime, and exposing their identities could put them at serious risk. Motherboard will not be releasing the app publicly or distributing it further.

Motherboard previously obtained one of the Anom phones from the secondary market after the law enforcement operation was announced. In that case, the phone had a locked bootloader, meaning it was more difficult to extract files from the device. For this new analysis of the code, a source provided a copy of the Anom APK as a standalone file which Motherboard then decompiled. Motherboard granted multiple sources in this piece anonymity to protect them from retaliation.

Decompiling an app is an everyday process used by reverse engineers to access the code used to construct an app. It can be used to fix problems with the software, find vulnerabilities, or generally to research how an app was put together. Two reverse engineering experts corroborated and elaborated upon Motherboards own analysis of the app.

Operation Trojan Shield has been wildly successful. On top of the wave of arrests, authorities were also able to intervene using the messages and stop multiple planned murders. In June, to mark the one-year anniversary of the operation's announcement, the AFP revealed it has shifted some of its focus to investigating thousands of people suspected of being linked to Italian organized crime in Australia, and that it is working with international partners.



Aurora Labs raises $63 Million in Series C Financing to bring AI to the Software-Defined Vehicle – Business Wire

TEL AVIV, Israel--(BUSINESS WIRE)--Aurora Labs, founded in 2016 by Zohar Fox (CEO) and Ori Lederman (COO), announced today that it has secured $63 million through a Series C financing round led by Moore Strategic Ventures (MSV). Also participating in the round were existing investor Porsche Automobil Holding SE (Porsche SE), majority owner of VW Group, and Colmobil Corp, Israel's leading automotive importer and distributor. Colmobil Corp is led by the Harlap family, who were early investors in Mobileye, SolarEdge and Via, amongst others. This round brings the total investment in Aurora Labs to approximately $100 million. Aurora Labs holds 90 patents and has 15 customer projects globally.

Aurora Labs' AI-based Vehicle Software Intelligence has reinvented how automotive companies, Tier-1 suppliers, silicon vendors and enterprises develop, certify and diagnose software, and conduct over-the-air updates. The company's solutions are being used by global automotive and device manufacturers to continuously collect actionable data and obtain a deep understanding of line-of-code software behavior. This level of understanding helps the software development teams streamline the processes of development, testing, integration, WP.29 compliance, continuous certification, and on-the-road, zero-downtime, over-the-air (OTA) updating. The solution also keeps software safe and secure from faults and cybersecurity attacks, while allowing manufacturers to continuously add new features and functions, extending the life of the device and enhancing user experiences.

Insight into automotive software behavior is crucial as more lines of code from a growing list of entities (Tier 1s, open-source projects, and automakers themselves) make the software-defined vehicle a reality. Aurora Labs' AI-based Vehicle Software Intelligence offers significant economic benefits to the auto industry with a clear, cost-effective value proposition, saving up to 98% of hardware and data transmission costs for software updates and up to 30% in software engineering hours, saving manufacturers billions of dollars on their bill of materials and data communications costs and enabling recurring revenue streams.

"Safe, quick, and reliable over-the-air update capability is fast becoming a minimum requirement to effectively compete in the increasingly software-focused auto landscape," says James McIntyre, Senior Managing Director and COO of MSV. "Aurora Labs provides OEMs with the tools they need to compete at the highest level, offering their customers a far better user experience than they are capable of delivering today."

"We are doubling down on our investment in Aurora Labs because of the importance of its AI-based technology to the automotive sector. The software provides developers and automotive OEMs with actionable insights from the development phase throughout the lifecycle of the car. We are convinced that the use of this technology provides significant benefits to OEMs and customers alike and will be a key enabler for software-defined and connected vehicles," said Lutz Meschke, board member responsible for investment management at Porsche SE.

"The continued commitment from Porsche SE and the investment from Moore Strategic Ventures and Colmobil Corp, all of which have proven success records of investing in the automotive sector, is evidence of the commercial traction we have in Europe and Asia. The amount of software being developed for, and deployed in, the vehicle is astronomical. For the industry to move forward and realize software-defined vehicles, sophisticated AI solutions are needed to enable Continuous Everything: Continuous Integration, Deployment (CI/CD), Testing, Certification and Updates. Aurora Labs' solutions will save automotive companies time and money and will ultimately save lives as vehicles become electric and more autonomous," said Zohar Fox, CEO, Aurora Labs.

About Aurora Labs

Aurora Labs is pioneering the use of AI and Software Intelligence to solve the challenge of automotive software development.

Aurora Labs brings AI-based Vehicle Software Intelligence to the entire lifecycle of a vehicle from software development to testing, integration, quality control, continuous certification and on-the-road over-the-air software updates. Aurora Labs focuses on the embedded systems that are key to the development of the software-defined vehicle and enables automotive manufacturers to more efficiently manage software costs and the resources required to develop and manage new vehicle features and mobility services.

The Company's products have been adopted on customer platforms around the world and, with a commitment to conform to ISO-26262/ASIL-D and ASPICE-L2, will appear in upcoming car models. Aurora Labs, founded in 2016, has raised approximately $100m and has been granted 90 patents. The Company is headquartered in Tel Aviv, Israel, with offices in Germany, North Macedonia, the US, and Japan.

http://www.auroralabs.com

Read more:

Aurora Labs raises $63 Million in Series C Financing to bring AI to the Software-Defined Vehicle - Business Wire

Microsoft Edge gets hit with the same serious security bug that plagued Chrome – Digital Trends

Microsoft just released an Edge browser update that patches a dangerous flaw that could allow a cleverly designed attack to execute arbitrary code. While every security update should be installed promptly, this one is a bit more urgent because the attack is in the wild already, meaning that hackers are already taking advantage of this vulnerability to breach security.

Designated CVE-2022-2294, this vulnerability was actually a flaw with the Chromium project, the open-source code that Google's Chrome browser is built upon. Microsoft uses the same base code for the Edge browser, meaning bugs that affect one often plague the other. Google patched the same bug recently and has been keeping quiet about details of the attack to allow others to make similar fixes, since Chromium is quite a popular codebase.

Microsoft recommends updating your browser as soon as possible, as there's a chance this bug is already impacting PCs. Without the update, hackers could launch attacks that give them full control over your computer, showcasing how severe this security risk is.

To update Microsoft Edge, click the three horizontal dots at the upper-right to open a menu of options, choose Help and feedback, then About Microsoft Edge. In most cases, the update should have already been downloaded or could begin downloading. If not, start the update manually.

When the download is complete, the Edge browser needs to be restarted to complete the installation. Click the Restart button, or close and reopen Edge, to get a fresh start. At this point, it's safe to browse again without concern about this particular bug.

Microsoft recommends choosing automatic updates for the Edge browser, which is possible from the same page. If there is an option to "Download and install updates automatically," it would be wise to enable it to get security updates as quickly as possible. A "Download over metered connections" option might also be shown; it applies to cellular connections. Since updates can sometimes be large, this option might be best left off unless you are on an unlimited plan.

Go here to read the rest:

Microsoft Edge gets hit with the same serious security bug that plagued Chrome - Digital Trends

GitHub Makes Copilot Available to the Public for $10/month, Free for Students and Open Source Project Maintainers – WP Tavern

GitHub has announced that Copilot, its new AI pair programming assistant, is now available to developers for $10/month or $100/year. Verified students and maintainers of open source projects will have free access to Copilot. The assistant is available as an extension for popular code editors, including Neovim, JetBrains IDEs, Visual Studio, and Visual Studio Code.

Copilot was trained on billions of lines of public code in order to offer real-time code suggestions inside the editor. GitHub claims it is capable of suggesting complete methods, boilerplate code, whole unit tests, and complex algorithms.
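The kind of completion GitHub claims Copilot produces is easiest to picture with a small example. Given only a signature and docstring like the one below, the assistant will typically suggest a complete body. This sketch is illustrative of the pattern, not actual Copilot output; real suggestions vary.

```python
def is_palindrome(s: str) -> bool:
    """Return True if s reads the same forwards and backwards,
    ignoring case and non-alphanumeric characters."""
    # A body like this is the sort of completion the assistant
    # proposes from the docstring alone (illustrative only).
    normalized = "".join(ch.lower() for ch in s if ch.isalnum())
    return normalized == normalized[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("hello"))                           # False
```

The developer's role shifts from typing the body to reviewing the suggestion, which is exactly where the efficiency claims and the criticisms discussed below both originate.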

"With GitHub Copilot, for the first time in the history of software, AI can be broadly harnessed by developers to write and complete code," GitHub CEO Thomas Dohmke said. "Just like the rise of compilers and open source, we believe AI-assisted coding will fundamentally change the nature of software development, giving developers a new tool to write code easier and faster so they can be happier in their lives."

Despite its many claims to improve developer efficiency, Copilot is still a controversial tool. Opponents object to the tool's creators training the AI on open source code hosted on GitHub, generating code without attribution, and then charging users monthly to use Copilot. It has also been criticized for producing insecure code and copying large chunks of code verbatim.

Even after 12 months in technical preview, Copilot remains polarizing at its public launch. Developers either seem to be impressed by its capabilities or offended by its ethical ambiguities. GitHub had more than 1.2 million developers in its technical preview and reports that those who started using Copilot quickly found it an indispensable part of their daily workflows.

"In files where it's enabled, nearly 40% of code is being written by GitHub Copilot in popular coding languages, like Python, and we expect that to increase," Dohmke said. "That's creating more time and space for developers to focus on solving bigger problems and building even better software."

See the original post:

GitHub Makes Copilot Available to the Public for $10/month, Free for Students and Open Source Project Maintainers - WP Tavern

Visual Studio adds ability to edit code in All-in-One Search – The Register

Microsoft has added the ability to edit code while in Visual Studio's All-In-One Search user interface.

The feature is included in Visual Studio 2022 17.3 Preview 2 and follows changes to search functionality in the development suite. At the start of the year, Microsoft introduced indexed Find in Files to speed up the already rapid searching (compared to Visual Studio 2019 at any rate).

The indexed Find in Files fired up a ServiceHub.IndexingService.exe process on solution load or folder open which scraped through the files to construct an index. Worries that the indexer would slug performance like certain other Microsoft indexing services were alleviated somewhat by the use of Below Normal operating system priority.

In April, with Visual Studio 17.2 Preview 3, a new All-In-One search experience turned up, which merged both the existing Visual Studio Search and Go To functionality into an unhelpful pop-up window in the IDE.

It's fair to say the idea was not universally well received as users requested that the Visual Studio team "stop wasting [their] time with searches," remarked that they "don't understand what is wrong with the current search," and wondered "if the VS team works on anything other than adding new searches."

Slightly ominously, one user said that they "would prefer if Visual Studio just follows how VS Code has implemented its search."

Visual Studio Code is the open-source elephant in the room, considerably less weighed down by the requirements of its sibling's legacy. VS Code regularly tops the charts of favored developer tools, with the full-fat Visual Studio trailing behind. As one user noted last month: "I will open VS only for the good old Winforms designer."

So what to do? Keep on adding stuff to Search, of course! The new feature is intended to allow developers to edit code directly in the search window via the familiar editor experience (think IntelliSense and Quick Actions). One can configure the code/results arrangement to be vertical or horizontal or simply turn off the code preview altogether.

The new search experience remains a preview and must be enabled via Tools > Options > Environment > Preview Features. Having taken the functionality for a spin, we can confirm it works as described and was positively handy when it came to dealing with massive solutions. However, we doubt it will do much to stop developers from jumping ship for something a bit less bloated when presented with the opportunity.

More:

Visual Studio adds ability to edit code in All-in-One Search - The Register

Boycott 7-Zip Because It’s Not On Github; Seriously? – PC Perspective

There is a campaign on Reddit that is gaining some traction, calling for a boycott of the software because it is not, "no true Scotsman" style, truly Open Source. The objection raised is that 7-Zip is not present on GitHub, GitLab, nor any other public code hosting site, and therefore is not actually Open Source. The fact that those sites do not appear at all in the Open Source Initiative's official definition of open source software doesn't seem to dissuade those calling for the boycott whatsoever.

You can indeed find the source code for 7-Zip on SourceForge, an arguably much easier site to deal with than the Gits, and it is licensed under the GNU Lesser GPL. That would qualify as open source software, with the LGPL likely chosen because 7-Zip includes the unRAR library to unzip RAR files, which requires a license from RARLAB.

The evidence offered for 7-Zip's supposed lack of openness rests on comments from a 12-year-old Reddit thread and the fact that the software sometimes has security vulnerabilities. As The Register points out, the existence of the NanaZip fork of 7-Zip, and the fact that 7-Zip has no problem with it, is much stronger evidence that the software is indeed open source.

You can find a link to the thread in the article, if you want to participate in one of the internets current pointless arguments.

Read the original post:

Boycott 7-Zip Because It's Not On Github; Seriously? - PC Perspective

Academic, Industry Leaders Form OpenFold AI Research Consortium to Develop Open Source Software Tools To Understand Biological Systems and Discover…

DAVIS, Calif.--(BUSINESS WIRE)--A set of leading academic and industry partners are announcing the formation of OpenFold, a non-profit artificial intelligence (AI) research consortium of organizations whose goal is to develop free and open source software tools for biology and drug discovery. OpenFold is a project of the Open Molecular Software Foundation (OMSF), a non-profit organization advancing molecular sciences by building communities for open source research software development.

OpenFold's founding members are the Columbia University Laboratory of Mohammed AlQuraishi, Ph.D., Arzeda, Cyrus Biotechnology, Genentech's Prescient Design, and Outpace Bio. The consortium, whose membership is open to other organizations, is hosted by OMSF and supported by Amazon Web Services (AWS) as part of the AWS Open Data Sponsorship Program. OMSF also hosts OpenFreeEnergy and OpenForceField.

Brian Weitzner, Ph.D., Associate Director of Computational and Structural Biology at Outpace and a co-founder of OpenFold, said, "In biology, structure and function are inextricably linked, so a deep understanding of structure is required to elucidate molecular mechanisms and engineer biological systems. We believe that open collaboration and access to powerful AI-powered structural biology tools will transform biotechnology and biosciences by empowering researchers and educators spanning life science companies, tech companies and academia with free access to use and extend these tools to accelerate discovery and develop life-changing technologies."

The first major research area for the consortium is to create state-of-the-art AI-based protein modeling tools which can predict molecular structures with atomic accuracy. The OpenFold consortium is modeled after pre-competitive technology industry open source consortia such as Linux and OpenAI.

First consortium-released AI model to predict protein structure yielding impressive results

The OpenFold founders also officially announced today the full release of the consortium's first protein structure prediction AI model, developed in Dr. AlQuraishi's laboratory and first publicly acknowledged on Twitter on June 22, 2022. The model is based on groundbreaking work at Google DeepMind and the University of Washington's Institute for Protein Design. The software is available under a free and open source license from The Apache Software Foundation at https://github.com/aqlaboratory/openfold. Training data can be found on the Registry of Open Data on AWS. A formal preprint and publication will be forthcoming.

Yih-En Andrew Ban, Ph.D., VP Computing at Arzeda and co-founder of OpenFold, said, "This first OpenFold AI model is already producing highly accurate predictions of protein crystal structures as benchmarked on the Continuous Automated Model EvaluatiOn (CAMEO), and has yielded on-average higher accuracy and faster runtimes than DeepMind's AlphaFold2." An example output from OpenFold, with comparison to experimental data, is included in the figure.

CAMEO is a project developed by the protein structure prediction scientific community to evaluate the accuracy and reliability of predictions.

Lucas Nivon, Ph.D., CEO at Cyrus and co-founder of OpenFold, said, "The first release of the OpenFold software includes not just inference code and model parameters but full training code, a complete package that has not been released by another entity in the space. It will allow a full set of derivative models to be trained for specialized uses in drug discovery of biologics, small molecules, and other modalities."

Researchers around the world will be able to use, improve, and contribute to what the consortium founders describe as their predictive molecular microscope. Current and future work will extend these derivative models to integrate with other software in the field and to be more useful for protein design and biologics drug discovery specifically.

Richard Bonneau, Ph.D., Executive Director at Genentech's Prescient Design, said, "OpenFold is many things to us: a code, a forum, a set of great minds to discuss our favorite topics! It has been a wonderful experience so far, and we are really excited to build out the next stages of the roadmap!"

Multiple other corporate and non-profit organizations are currently joining the OpenFold consortium as full members, and the founders invite biotech, pharma, technology and other research organizations to join. The consortium is currently evaluating proposals for new AI protein projects from academic groups around the world.

About OpenFold

OpenFold is a non-profit artificial intelligence (AI) research consortium of academic and industry partners whose goal is to develop free and open-source software tools for biology and drug discovery, hosted as a project of the Open Molecular Software Foundation. For more information please visit: OpenFold Consortium

View post:

Academic, Industry Leaders Form OpenFold AI Research Consortium to Develop Open Source Software Tools To Understand Biological Systems and Discover...