Programming languages: One in four Go developers are already using this ‘most requested’ feature – ZDNet

Getty Images/Nitat Termmee

About a quarter of developers using Google's open source Go programming language have started using generics, a highly requested feature that was missing until this year. Meanwhile, although developers worry about supply chain security, many are ill-equipped to respond to vulnerabilities.

Go gained generics in version 1.18, released in March, when it was described as 'Go's most often requested feature', so it's not surprising it has since been quickly adopted. According to the June 2022 Go developer survey, over a quarter of the 5,752 respondents have started using generics in their Go code. Go is the 16th most popular programming language, according to developer analyst RedMonk's January 2022 rankings.

Todd Kulesza, a UX designer on Go, wrote in a blogpost that the addition of generics was welcome, but noted that about a third of developers are running into limitations of its initial implementation.

Generics, or support for type parameters, bring more type safety to Go and can improve productivity and performance. Some 86% of respondents were aware that generics shipped in Go 1.18, and 26% had used them, with 14% already using generics in production or released code. However, 54% said they didn't need generics today, while 12% had used generics but not in production code.

Another obstacle to using generics was that linters didn't support them, while 26% reported using a pre-1.18 release or being on a Linux distribution that didn't yet provide Go 1.18 packages.

But 10% reported that using generics had resulted in less code duplication.
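
To make the duplication point concrete, here is a minimal sketch of the kind of type-parameterized function Go 1.18 enables; the Number constraint and Sum function are illustrative examples of my own, not code from the survey or the ZDNet article:

```go
package main

import "fmt"

// Number constrains the type parameter to common numeric types.
// Before Go 1.18, Sum would have needed a separate copy for each type.
type Number interface {
	~int | ~int64 | ~float64
}

// Sum adds up a slice of any type satisfying the Number constraint.
func Sum[T Number](values []T) T {
	var total T
	for _, v := range values {
		total += v
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))           // 6
	fmt.Println(Sum([]float64{1.5, 2.5, 3.0})) // 7
}
```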

Kulesza says worries over vulnerabilities in Go dependencies are a "top security concern". Only 12% of developers were using tools like fuzz testing on Go code. A sizable 65% of developers were using static analysis tools, but only 35% of them used these tools to find vulnerabilities.

The survey found that 84% use security tooling during CI/CD, but this is often too late in the development cycle, as developers want to be notified about a vulnerability in a dependency before building on it.

The Go team this week also launched new vulnerability management tools and a vulnerability database for Go based on data from Go package maintainers. Go 1.18 was also the first version to feature fuzzing in its standard toolchain. The Go fuzz tests are supported by Google's open source fuzzing tool OSS-Fuzz.
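
For readers who haven't tried native fuzzing yet, a fuzz test lives alongside ordinary tests in a _test.go file and is driven by the standard testing package. The sketch below is a hypothetical example of my own (the Reverse function is not from the article), showing the general shape; it would be run with `go test -fuzz=FuzzReverse`:

```go
package reverse

import (
	"testing"
	"unicode/utf8"
)

// Reverse is the function under test: it reverses a string rune by rune.
func Reverse(s string) string {
	runes := []rune(s)
	for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
		runes[i], runes[j] = runes[j], runes[i]
	}
	return string(runes)
}

// FuzzReverse feeds randomly mutated strings into Reverse and checks two
// invariants: reversing twice returns the original string, and valid
// UTF-8 input yields valid UTF-8 output.
func FuzzReverse(f *testing.F) {
	f.Add("hello, world") // seed corpus entry
	f.Fuzz(func(t *testing.T, s string) {
		if !utf8.ValidString(s) {
			return // only check the invariants for valid UTF-8 input
		}
		doubled := Reverse(Reverse(s))
		if doubled != s {
			t.Errorf("Reverse(Reverse(%q)) = %q, want %q", s, doubled, s)
		}
		if !utf8.ValidString(Reverse(s)) {
			t.Errorf("Reverse(%q) produced invalid UTF-8", s)
		}
	})
}
```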

These are all activities the NSA recently recommended developers adopt to improve software supply chain security and secure coding practices, which came into focus after the 2020 SolarWinds breach.

The Go survey highlights some problems developers face.

Fifty-seven percent of developers reported having difficulties evaluating the security of third-party libraries. Kulesza notes GitHub's Dependabot or the Go team's govulncheck can assist here. In fact, Dependabot was by far the most common way respondents learned of a vulnerability in a dependency.

However, only 12% reported conducting an investigation to see whether and how their software was affected by a vulnerability. Of those who did investigate a vulnerability's impact, 70% found the impact analysis itself the most challenging part of the process. They also reported it was often unplanned and unrewarded work.

The most popular code editor for Go developers was Microsoft's cross-platform Visual Studio Code (VS Code), which is used by 45% of respondents, followed by GoLand/IntelliJ (34%), Vim/Neovim (14%), and Emacs (3%).

Some 59% of respondents developed on a Linux machine, followed by 52% on macOS, and 23% on Windows, with 13% using the Windows Subsystem for Linux. By far the most common platform to target was Linux at 93%, followed by Windows at 16%, macOS at 13%, and IoT devices at 5%.

Continue reading here:
Programming languages: One in four Go developers are already using this 'most requested' feature - ZDNet

Changing the horizon through open source cooperation – The Register

Webinar There is nothing quite like cooperation in any walk of life to make us feel better about the human species and foster progress. In our early hunter-gatherer days it was working together that ensured mammoth-hunting success and the very existence of the open source community is another example of an inspiring force for good.

Intel has been a part of this community since the pioneer days of open source activity and has gone on to develop new approaches offering system choice and flexibility to infrastructure developers and engineers, including its work through the Linux Foundation's Open Programmable Infrastructure Project (OPI) involving leading semiconductor and systems makers.

OPI is foundational, flexible, and super-smart. It can aid in building frameworks, support any hardware, create complex open application ecosystems, and integrate existing open source elements, not least building new APIs for IPU and DPU-driven functions.

Join Rob Sherwood, NEX Cloud Networking Group CTO at Intel, and Situation Publishing's Nicole Hemsoth on 13th September at 1pm BST, 8am EDT and 5am PDT to hear everything there is to know about the present and the future of open source infrastructure programming. The dynamic duo will take a forensic look at the Infrastructure Programmer Development Kit (IPDK), and what Intel sees coming down the line for the OPI.

Register for this webinar here and we will send you a reminder.

Sponsored by Intel

View original post here:
Changing the horizon through open source cooperation - The Register

Startup Reveals A Revolutionary Open Source Image-to-Text Generator – Open Source For You

Rabat: Although AI picture generators are no longer groundbreaking, a London-based business is making waves online for introducing a text-to-image AI generator that could revolutionise the industry. According to multiple accounts, the Stable Diffusion tool relies on open source image synthesis and machine learning, feeding algorithms existing data and allowing them to generate new output without any programming.

Stable Diffusion, a deep learning tool, enables users to virtually produce imaginative graphics utilising key phrases of two words or more. The technology's fundamental premise was already well established before Stable Diffusion, but now that the technology is being made available online as an open-source tool, anyone can use it.

The tool, which only made its debut two weeks ago, is quickly gaining popularity, even more so than its forerunners, with some analysts asserting that its implications are as big as the invention of the camera. The breakthrough in AI image synthesis that preceded Stable Diffusion appeared in 2014.

The first text-to-image tool, DALL-E 2, was going to be released this year, according to the artificial intelligence research group OpenAI. The technology turns written text into a variety of visual output, such as realistic pictures and sci-fi-themed works of art. Google and Facebook (now Meta) announced the debut of their own text-to-image generators not long after OpenAI revealed its model.

Stable Diffusion raises numerous ethical issues, much like any new technology that is available in open source. The tool is programmed not to produce any harmful content, such as propaganda, violent scenarios, or pornography, according to the original code. Since the source code is open, it would be possible to bypass these limitations.

The rest is here:
Startup Reveals A Revolutionary Open Source Image-to-Text Generator - Open Source For You

Passing the PyTorch – Protocol

Hello, and welcome to Protocol Enterprise! Today: why Meta just transferred stewardship of an important AI framework to the Linux Foundation, how last week's California heat wave took out a Twitter data center, and the latest funding rounds raised in enterprise tech.

Meta is handing the reins for PyTorch, its popular open-source AI framework, to the nonprofit open-source software consortium Linux Foundation. PyTorch was designed to optimize deep learning, and gets its name from the AI programming language Python and open-source machine-learning library Torch.

The move shifts the commercial and marketing aspects of PyTorch to the Linux Foundation's newly launched PyTorch Foundation. But much of PyTorch's technical governance, which has used a typical shared oversight model for years, will stay the same, Meta engineer Soumith Chintala told me.

The transition could help chip away at the backlog of requests for PyTorch improvements.

Still, some worry that the Linux Foundation has too much power over enterprise tech.

The Cloud Native Computing Foundation's flagship conference gathers adopters and technologists from leading open source and cloud native communities in Detroit, Michigan from October 24-28, 2022. Register now and join thousands of attendees, including maintainers for CNCF's 140 Graduated, Incubating, and Sandbox projects, either virtually or in-person.

Register to attend: In-person | Virtual

The global innovation race is well underway. What is the U.S. administration doing to stay ahead, and where is it falling short? What is the status of funding by Congress and in statehouses, and which areas still need investment? Is the U.S. doing enough to attract and retain top tech talent from around the world?

Join Protocol Policy for a virtual event on Sept. 27 at 10 a.m. PT as we dive into the U.S.'s national strategy on innovation, what's working, what isn't and what policy changes we can expect in the year ahead. RSVP here.

Thanks for reading - see you tomorrow!

Visit link:
Passing the PyTorch - Protocol

Java Or Python For Android – Why Not Both! – iProgrammer

Should you choose Java or Python for your next Android project? You don't have to with Chaquopy, the Python SDK that lets you write Android applications in Python. Thanks to support from Anaconda, it is now both free and open-source.

Chaquopy is versatile. It allows you to write Android applications entirely in Python, or partially in Python together with Java. You can pick the most suitable tool for each part of the application at hand.

Each approach has its own distinct advantages. If you have a server backend written in Python, you can now use Python for the front end as well, without paying the penalty of context switching to another tool or language, or of lacking expertise in another stack. The extra boon is that you can use your favorite Python machine learning libraries, like SciPy, OpenCV or TensorFlow, directly on the client/mobile phone without connecting to the cloud.

While the cloud still monopolizes the space where neural networks and their algorithms breed, things seem to be shifting, with those elaborate algorithms looking to move onto mobile devices and run offline. That includes their training too; the pictures, notes, data and metadata that reside on the device will also serve to train the network and aid its learning activities, such as recognizing, ranking and classifying objects.

The difference is that now all of that is going to happen locally. As such, common deep learning user experiences that could be realized locally include scene detection, text recognition, object tracking and avoidance, gesture recognition, face recognition and natural language processing.

For instance, there are apps that help organize photos on the user's phone, utilizing an algorithm that combines artistic photography principles with deep learning technology. Such an app can sort photos based on topics, locations and events, and can also recognize the best shots, based on a ranking system it employs.

Working offline, shifting processing from the cloud onto the device, has distinct advantages. Online processing requires the presence of either a WiFi or mobile connection, which can be sluggish, and raises a host of privacy concerns. Then, looking at it from an ever-practical perspective, multiple concurrent requests from thousands of client devices can easily overload a cloud-based service and leave the client machine prone to long delays in getting a response, or even to full-scale denial of service.

So imagine having Python's ML libraries at your disposal on Android. This is happening thanks to Chaquopy. It goes beyond Python, however; Java enthusiasts can rejoice, since Chaquopy's APIs let them access those libraries from their Java code.

Don't get me wrong, I'm not taking away from Chaquopy's other strengths, like building UIs, accessing native Android APIs or working in sync with Java; it's just that having access to Python's ML ecosystem on a mobile device stands out.

Chaquopy is distributed as a plug-in for Android's Gradle-based build system, and you can access all native APIs and even build your app entirely in Android Studio. Also, through simple APIs, you can call Python code from Java and Kotlin, and vice versa. It can be used in any app which meets the following requirements:

In your project's top-level build.gradle file, the Android Gradle plugin version should be between 4.1 and 7.2. Older versions as far back as 2.2 are supported by older versions of Chaquopy.

The Android plugin may be listed as com.android.application, com.android.library or com.android.tools.build:gradle.

minSdkVersion must be at least 16. Older versions as far back as 15 are supported by older versions of Chaquopy.

Chaquopy's previous license-locked versions would work as advertised, but only for five minutes of runtime. Open source projects were given a free license, but commercial ones had to get a paid license. Not anymore. Thanks to support from Anaconda, Chaquopy is now free and open source, with the SDK's full source code available on GitHub under the MIT license. The first open-source version is 12.0.1, released in late July, which apart from removing the license restrictions is identical to version 12.0.0.

For examples of how to use Chaquopy, see the following apps:

That being said, BeeWare is the closest rival, in that it allows using Python for cross-platform development. This means that you can have a single user interface across Android, iOS, Windows and Mac; thus with BeeWare your app will have a standard look across all supported platforms, while with Chaquopy you'll get that native Android app experience.

But Chaquopy's strongest selling points are its deep integration with Android's development tools and its broader support for third-party Python libraries. As noted in BeeWare's manual itself, support for third-party Python libraries is limited:

On desktop platforms (macOS, Windows, Linux), any pip-installable package can be added to your requirements. On mobile platforms, your options are a little more limited - you can only use pure Python packages, i.e., packages that do not contain a binary module.

This means that libraries like numpy, scikit-learn, or cryptography can be used in a desktop app, but not a mobile app. This is primarily because mobile apps require binary modules that are compiled for multiple platforms, which is difficult to set up.

It's possible to build a mobile Python app that uses binary modules, but it's not easy to set up - well outside the scope of an introductory tutorial like this one. This is an area that we'd like to address - but it's not a simple task. If you'd like to see this added to BeeWare, please consider supporting the project by becoming a member.

Chaquopy, on the other hand, has got that elusive support. Looking at its native package repository we find that, amongst others, it has support for matplotlib, numpy, opencv, pandas, scikit-learn, scipy and tensorflow. As well as the packages listed there, Chaquopy also supports most pure-Python packages on PyPI.

Ultimately Chaquopy gives you options. Go full stack Python. Keep the user interface in Java and connect to Python on the server or to Python's libraries on device. Access Java libraries from Python and vice versa. Keep the Java and Kotlin bits focused on the Android part and keep the Python bits focused on what Python does best.

Chaquopy | Chaquopy on GitHub

Read the rest here:
Java Or Python For Android - Why Not Both! - iProgrammer

Use These 4 Tips to Attract and Retain Software Developers – Entrepreneur

Opinions expressed by Entrepreneur contributors are their own.

It's no secret that the way business is done has changed dramatically over the past few years. The widespread digital transformation has forced businesses across industries to recognize the value of the people who are building our new digital world: software developers.

While many tech businesses are grappling with layoffs and hiring freezes, data shows that developers have never been more in demand, with thousands of new developer positions opening up every day. On the heels of the Great Resignation, we are seeing a developer shortage, with supply nowhere near meeting the sky-high demand. And unfortunately, the gap shows no signs of closing.

DigitalOcean recently surveyed over 2,500 developers around the world on their working environments, job satisfaction and the biggest challenges they're facing at work. What we found should be of concern to any business leader trying to keep pace in the digital world. Over 25% of experienced developers (those who have been in the workforce for over a year) have started a new job within the last year, and 42% of those who didn't are considering it. In other words, developers are quitting their jobs at a rate that is almost double the general average.

To ensure your organization isn't hit with resignations, it's important for tech leaders to rethink their strategies for recruiting and retaining developers and technical staff. Here are four recommendations:

Our most recent Currents survey shows that compensation and a preference for flexible and/or remote work environments are among the main reasons developers are thinking about changing or have already changed employers. It's fair to question whether or not tech giants like Tesla, Meta, Salesforce, Apple, Google and others have shot themselves in the foot with their rigid and sometimes controversial back-to-office plans.

DigitalOcean has had a flexible work policy for a long time, with over half of our employees working remotely prior to the start of the pandemic. We are now fully remote and provide access to co-working space to anyone who wants it. Based on our low attrition rate, this approach has proven to resonate with our team and particularly those with deep technical skills.

Developers are independent, digital-native people. Whether it be fully remote, hybrid or fully in-office, the important thing is to first listen to developers' needs and then offer them the freedom to choose which working environment works best for them.

The developer community is as diverse as the companies it supports and is made up of some of the smartest technical minds in the world. This community, varied as it is, finds open-source projects to be a place where its members can gather, collaborate and contribute. However, not many companies are giving their developers the time or compensation to contribute to these projects, despite the fact that 64% of companies use such code for more than half of their software.

It's increasingly important to listen to your staff's needs and wants. While you may not be able to deliver on every single wishlist item, listening will help you build trust with your employees, and as a bonus, you may identify opportunities for your business. Giving developers time to contribute to projects they care about (like open source) while on the clock shows you understand what is valuable to them and also helps strengthen your tech stack.

Developers are known to be lifelong learners, and their role has evolved significantly in recent years. They are constantly learning new skills and programming languages, as well as adopting cutting-edge technologies and methodologies, all in the name of keeping up with the pace of innovation. Businesses need to be sure that developers have the educational resources, courses, training, tutorials and mentorship to keep their skills up to date.

This is true not only for career developers but for people who are brand new to the field, too. The hot developer job market has opened up a door for the next generation of developers (many of them self-taught, pivoting from a totally different field or coming from non-traditional educational paths like coding boot camps) who may face a steep learning curve.

It is smart for business leaders to invest in their staff, particularly those in technical fields. The opportunity for ongoing education and reskilling will be enticing to any developer candidate.

Another common grievance from developers is a lack of time and resources to work on projects. This challenge is likely due to too much time being spent on more menial tasks like cleaning up code and creating documentation.

Arming developers with a simplified toolkit takes the manual work off of their plates. This helps developers build faster and also frees up time for them to spend on creative, strategic work that actually makes an impact on the business.

Software developers are a unique group of people who have long had many stereotypes held against them in the corporate world. Now is the time for businesses to reverse those stigmas to gain a better understanding of this group and what they need to be successful. Survival in the digital age depends on it.

See more here:
Use These 4 Tips to Attract and Retain Software Developers - Entrepreneur

The Open Source Ztachip Is a RISC-V Accelerator for Edge AI and Computer Vision Applications – Hackster.io

Embedded developer Vuong Nguyen has released an open source RISC-V accelerator designed to boost the performance of edge AI and computer vision tasks up to 50 times, and you can try it out yourself by loading it onto a field-programmable gate array (FPGA).

"Ztachip is a RISC-V accelerator for vision and AI edge applications running on low-end FPGA devices or custom ASIC [Application Specific Integrated Circuit]," Nguyen explains. "An innovative tensor processor hardware is implemented to accelerate a wide range of different tasks from many common vision tasks such as edge-detection, optical-flow, motion-detection, color-conversion to executing TensorFlow AI models."

The accelerator is built around the free and open source RISC-V instruction set architecture (ISA), and comes with some impressive performance claims: compared to a standard RISC-V core without specific optimizations for machine-learning workloads, the ztachip can accelerate performance by 20 to 50 times, even outperforming RISC-V chips that include the recently ratified vector processing extensions.

The accelerator comes complete with what Nguyen calls "a new tensor programming paradigm," which is part of the secret behind the acceleration on offer. Despite its performance, though, the ztachip core is built to be resource-light running happily on relatively low-end FPGA devices, which should in turn translate to being realizable in silicon without too much cost or complexity.

The core can be run on the Arty A7 FPGA development platform, using a camera board as input. (Image: Vuong Nguyen)

Ztachip is available to run in simulation or on Altera or Xilinx FPGAs, using a wrapper layer to ease porting to additional platforms when required. A demonstration of the accelerator running on an Arty A7 FPGA development board showcases the use of a range of networks and tasks, including TensorFlow Mobinet image classification, SSD-Mobinet object detection, Canny edge detection, Harris-Corner point-of-interest detection, motion-sensing, and a neat multi-tasking demo which runs object, edge, point-of-interest, and motion detection simultaneously.

The ztachip source code is available on GitHub under the permissive MIT license, with instructions on getting started with deploying the core to an FPGA.

Read this article:
The Open Source Ztachip Is a RISC-V Accelerator for Edge AI and Computer Vision Applications - Hackster.io

Polkadot (DOT/USD), Cosmos (ATOM/USD) The Open-Source Blockchain Application Platform, Lisk is Develo – Benzinga

Lisk is a blockchain application platform that envisions a world where blockchain technology is accessible to everyone.

To make that happen, it has created a suite of open-source blockchain application development tools that enable developers to build blockchain apps using JavaScript and TypeScript, meaning they don't have to learn a new programming language to start building with blockchain.

That, combined with the Lisk Grant Program and a tireless commitment to improving the Lisk platform, is what the company says is helping shape a vibrant and diverse ecosystem. While some of its features and functions are still in development, the ecosystem that's emerging might give users a taste of what the future could look like as blockchain technology becomes more scalable, interoperable and accessible.

The team at Lisk describes the Lisk platform as "an ecosystem where developers and entrepreneurs can connect to create their own projects with the tools built by us." To bring more of those developers and entrepreneurs into the ecosystem, Lisk is offering grants to support developers who want to build apps using Lisk's open-source app-building platform.

That's helped blockchain developers work on innovative projects like virtual reality (VR) games, educational platforms and social media platforms.

Topas City, for example, will be an immersive virtual world where users can explore a dystopian city and earn tokens by playing arcade games and trading in the marketplace. When they're not earning, they can spend time socializing with other Topas residents at the 99 Bar, hanging out in their virtual apartment or, if they're lucky enough to nab one of the rare Elite cards, relaxing inside gated communities, luxury apartments and a private bar.

Enevti will become a non-fungible token (NFT) social media platform that moves NFTs beyond just art and collectibles, giving influencers new ways to engage with their audience. Specifically, the platform will offer influencers a way to make NFTs into smart utility assets that can be distributed to fans, who can redeem them for things like one-on-one video calls, admission to exclusive events and physical gifts.

Meanwhile, Kalipo is a decentralized autonomous organization (DAO) platform that will provide the tools needed for groups to collaborate online in a more democratic and transparent way. Those tools include online voting mechanisms that make shared decision-making easier and online fund management tools that bring transparency to raising and using funds for the organization. These tools can be used by groups of all sizes and purposes, from start-ups and nonprofits to neighborhood organizations or parent-teacher committees.

As the research team continues to develop the platform and network, it reports nearing the finish line on achieving interoperability inside the Lisk ecosystem, meaning the apps built on the platform will soon be able to communicate and transact with each other.

Having published the core and supporting improvement protocols that would make that interoperability possible, the team is in the final phase of improving the interoperability solution and officially launching the Lisk blockchain application platform.

When that happens, users will be able to easily navigate between the different apps in the ecosystem and transfer NFTs and Lisk LSK/USD tokens between them quickly and efficiently.

Once the Lisk ecosystem is fully interoperable, the team has its sights set on building cross-chain bridges that would make it interoperable with other blockchain networks, like Ethereum ETH/USD, Cosmos ATOM/USD and Polkadot DOT/USD, expanding the same kind of connectivity inside the Lisk ecosystem across any blockchain network.

This post contains sponsored advertising content. This content is for informational purposes only and is not intended to be investing advice.

Featured photo by xresch on Pixabay

Read more:
Polkadot (DOT/USD), Cosmos (ATOM/USD) The Open-Source Blockchain Application Platform, Lisk is Develo - Benzinga

Frictionless Enterprise – the Tierless Architecture of composable IT – Diginomica

(Source: diginomica.com)

Although Frictionless Enterprise is about much more than technology, it is fundamentally shaped by technology. Therefore the Information Technology (IT) infrastructure an organization adopts is crucial to its ability to thrive in this new digitally connected era. This is no skin-deep change. Digital technology has evolved enormously since the advent of the Internet and the emergence of cloud computing, and is continuing to evolve rapidly. So too has the way that the IT function operates and engages with others across an organization. In this chapter, we map these changes and their implications for enterprise IT.

Putting computing on the open network of the Internet, moving it from islands of disconnected isolation into a global fabric of near-ubiquitous connectivity, has forced it to adopt a more networked, atomic architecture, which we'll explore in detail below. Just as important, this has also enabled new ways of working for those who design and operate IT.

The early days of Internet connectivity made it possible for technologists to co-operate globally on software design, leading to the growth of open source software. Much of our most important infrastructure is now built on open source software, leveraging the pooled knowledge and experience of the community to continue to evolve and enhance it. Better connectivity allowed software engineers to work in agile DevOps teams, where the people who write the software work side-by-side, often virtually, with those who put it into operation. Meanwhile, the emergence of public hyperscale cloud computing prompted the development of more automated ways of deploying software. This in turn enabled the evolution of Continuous Integration and Continuous Delivery (CI/CD) to allow the rapid delivery of changes and new capabilities in small increments.

All of these changes in how IT works have been enabled by growing connectivity and have reinforced the consequent atomization of software to allow more rapid change. For example, the reorganization of software development into smaller DevOps teams was accompanied by the emergence of widely accepted standards for easily connecting software components using Application Programming Interfaces (APIs). In an inverse validation of Conway's Law, changing the communication structure of IT made these new patterns of software design inevitable.

A flatter, more modular IT architecture is emerging in response to this more digitally connected environment. In the old world of disconnected systems, each system was built as a vertical stack with several discrete tiers. The end user interacted with an application interface in the upper tier. Behind this User Interface (UI) sat the next tier of application servers, where functions were processed. In turn, these application servers stored and fetched data in the final database tier. The entire stack was optimized for a specific application, and accessing the underlying data or executing functions for any other purpose required complex integration technology or cumbersome workarounds.

The new enterprise IT architecture decomposes all of these tiers and makes their components readily available via APIs. Instead of a 3-tier or N-tier stack of UI, application server(s) and database, there is a Tierless Architecture of engagement, functions and resources:

The new model is tierless: an open network ecosystem in which any function or resource becomes available through APIs to any qualified participant. Whether those functions and resources are data stores, microservices, system resources, serverless functions or SaaS applications and processes, the API layer makes them equally available as autonomous, multi-purpose, composable services. They connect up to produce results and then present the outcomes to the end user through an engagement layer. This combination of headless engagement with serverless functions and resources defines the new architecture.

Here's what we mean by headless engagement and serverless functions and resources:

The latest developments are unfolding both at the presentation layer (the 'head') and at the underlying services layer (the 'servers'). Like many disruptive technologies in their early phases (think 'horseless carriage' and 'wireless receiver'), these two trends are named for what they replace rather than for what they bring ...

Today's emergent systems are headless because the presentation layer isn't fixed and therefore the user experience can take many different forms. A commerce system can be experienced on the web, on mobile, or through in-store gadgetry, while an enterprise application might be delivered as a web app, a mobile app, or within a messaging platform such as Slack or Microsoft Teams. Rather than headless, they are many-headed, with unlimited choice as to how to present the user experience.

The underlying systems are serverless because the servers on which all of the computing runs are hidden away behind a layer of application programming interfaces (APIs). An infinite variety of interchangeable resources is available through this API services layer, ranging from custom microservices built by an organization's in-house IT team, to serverless functions delivered from cloud providers, to complete SaaS applications, and much more besides. Instead of being limited to what you are able to build and provision from your own servers, there is a global network of on-demand services at your disposal.
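
To make the pattern concrete, here is a minimal sketch of my own (not code from diginomica or any vendor named here; the endpoint, types and figures are illustrative) of a single composable function published behind an HTTP/JSON API. Nothing in it assumes a particular 'head': a web page, mobile app or chat bot could all act as the engagement layer by calling the same endpoint:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Quote is the resource this small service exposes over its API.
type Quote struct {
	SKU      string  `json:"sku"`
	Quantity int     `json:"quantity"`
	Total    float64 `json:"total"`
}

// quoteHandler is an autonomous function: it owns its own logic and data,
// and publishes the result as JSON for whichever "head" calls it.
func quoteHandler(w http.ResponseWriter, r *http.Request) {
	q := Quote{
		SKU:      r.URL.Query().Get("sku"),
		Quantity: 3,
		Total:    29.97, // stand-in for a real pricing lookup
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(q)
}

func main() {
	// Any engagement layer consumes the same API, for example:
	//   GET http://localhost:8080/api/quote?sku=ABC-123
	http.HandleFunc("/api/quote", quoteHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```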

This new architecture cuts across the old functional silos that defined traditional enterprise applications, replacing complex Enterprise Application Integration (EAI) between monolithic application stacks with a more flexible ecosystem of autonomous components that connect through standardized APIs and contracts. The implications are far-reaching:

... [T]he traditional bundle of functionality that makes up an enterprise application has been broken down into separate components that are then recombined in new ways to provide a different, more streamlined outcome that wasn't possible without the new technology. This is a phenomenon known to economists as unbundling and rebundling, and it's invariably a harbinger of disruptive innovation in a given field as new patterns of consumption become possible.

Among the first to embrace this new composable architecture were digital experience (DX) developers: those who build websites and mobile apps that deliver dynamic content and e-commerce. In the mid-2010s, a new breed of vendors started offering 'headless' content and commerce platforms, mostly built on open-source technologies. Shortly afterwards, the term Jamstack was coined by Netlify founder Matt Biilmann to describe the architecture, where JAM stands for client-side JavaScript, server-side APIs and static front-end Markup. More recently, leading vendors and consultancies in this space formed the MACH Alliance as an industry advocacy group for a more expansive definition called MACH, which stands for Microservices-based, API-first, Cloud-native SaaS and Headless.

The recent inaugural MACH One conference demonstrated the growing enterprise adoption of this new architecture in composable commerce and DX, with speakers from retailers and consumer brands including Asda, Kraft Heinz, LEGO, Mars and River Island. All of them bear witness to the rapid implementation, smooth scalability and ongoing flexibility of the architecture, as well as its impact on IT's relationship with the business. As Rainer Knapp, Global Director of IT & Digital at Wolford puts it:

Using this freedom now to implement whatever comes in mind and makes sense for the business is something that will change the behavior I think a lot in the future ... I consider myself more being a business manager, honestly, with the advantage of having the lever of IT in his hand, than as a pure IT techie.

Another vector in the shift to Tierless Architecture and composable IT can be seen in the rise of messaging platforms as another way to interact with enterprise applications and resources. Known as conversational computing, this began with the emergence of voice assistants, chatbots and messaging apps, in which the user could ask an intelligent software agent to query data or perform actions without having to go into the underlying enterprise application. This makes those applications 'headless', with the messaging platform acting as the engagement layer and accessing their functions or data via APIs.

Digital teamwork platforms such as Slack and Microsoft Teams quickly recognized the benefits of connecting into external functions and resources to bring them into the user's flow of work right there in the messaging app. Microsoft has brought its Power Apps custom app builder into Teams and added powerful data capabilities with Dataverse for Teams. Slack also has an app builder and an ecosystem of partner app connections along with workflow automation and data connectivity. Its founding CEO Stewart Butterfield sees huge potential in connecting best-of-breed apps:

Slack when it's really working for individual organizations on the inside becomes this lightweight fabric for systems integration. And that's just as valuable across boundaries as inside.

We've already discussed in an earlier chapter the core role of digital teamwork in Frictionless Enterprise, and the Collaborative Canvas that underpins it. This teamwork platform is a critical element of the IT infrastructure because, once it has all the necessary connections into other systems, it forms the primary engagement layer for everyone's work. Just as we've seen with headless content and commerce, separating that engagement layer from underlying monolithic applications paves the way towards replacing them with a more composable set of functions and resources.

The impact of these new approaches on the IT function, as Wolford's Knapp says, is that it becomes a more engaged participant in achieving business goals, rather than simply a provider of technology at the behest of the business. In enterprises that operate large portfolios of SaaS applications, this has given rise to a new tribe of IT professionals who call themselves business systems specialists. Rather than operating in a discrete functional silo, they see themselves as embedded in the business and focused on its goals, and look to deliver tangible business value in short, agile projects. Their philosophy echoes what we've heard from MACH adopters.

Among teams like these, DevOps has been extended by concepts such as author Marty Cagan's notion of product management, in which small, empowered product teams bring together software developers, product managers and designers to focus on specific goals. This is in line with the trend towards forming what Gartner calls fusion teams, which the analyst firm says "blend technology and business domain expertise to work on a digital product." Such teams are likely to proliferate with the spread of low-code and no-code development tools, where the involvement and support of pro coders from the IT function can help avoid common pitfalls. My preferred term for this combination of tech and business talent is 'co-code', as I recently explained:

There's no need to set up schisms between business people and IT when they can achieve far more by working in harmony ... Enterprise IT can put governance in place and manage the creation of the building-block components, while supporting business users as they make prototypes, test new functionality, or assemble their own automations.

In each of these examples, IT becomes engaged as a partner with business colleagues in achieving results. This pattern of delivery has much in common with the XaaS Effect that we discussed in an earlier chapter on customer engagement, but in this case applied to an internal function.

Releasing data from legacy application silos is becoming a priority as organizations seek to catch up with the on-demand, real-time cadence of Frictionless Enterprise. Every sphere of activity aspires to be data-driven, using connected digital technologies to achieve pervasive access to data that's as fresh as possible, and delivered in the context of everyday operational decision-making. In Tierless Architecture, data is set free from application silos and becomes just another resource that's accessible through the relevant API.

But despite the growth of platforms such as Snowflake and Confluent that help organizations marshal and analyze data at speed, there is still work to be done to turn data more readily into transferable information. Traditional applications have optimized their datasets for their own internal operations, which means that initiatives to build common ontologies for datasets such as a Customer Data Platform are still at a very early stage. In other fields, such as the work graphs built by digital teamwork vendors, no one has yet started to think about creating standards to allow the interchange of graph data. This is one area where the tooling for Tierless Architecture is still relatively immature.

As with any new technology paradigm, Tierless Architecture will face resistance, especially while the relevant skills are not mainstream and the toolsets and techniques are still evolving:

These new technology patterns require IT professionals and developers to abandon familiar, trusted ways of working. Their novel approaches are less well documented and therefore often appear less effective at first glance. Established vendors whose products cannot adapt to the new paradigm will stoke skepticism about its claimed advantages. There are many arguments and debates ahead.

The landscape becomes further confused when established vendors latch onto up-and-coming buzzwords such as 'headless' and apply them to existing products while retaining many of the characteristics of a tiered stack. This became so prevalent during the rise of cloud computing that the phenomenon became widely known as cloudwashing. To guard against attempted 'MACH-washing', the MACH Alliance has a rigorous certification program. Enterprises must be on their guard against fake composability.

While some vendors will drag their feet, many others are already adapting. There's a growing trend amongst established vendors to move towards composable platforms. Meanwhile, a new generation is coming through to take the place of the laggards. This new wave of vendors have grown up with a connection-first outlook:

They build on whatever technology comes to hand - open source and cloud infrastructure, connected services. For them, competitive advantage doesn't come from owning the stack, it comes from being free to select the best available resources for the moment ...

The conventional wisdom is to maximize what you own, but in today's hyperconnected cloud world, there's a new maxim: focus on whatever it is you can scale first, and faster, than anyone else. For everything else, use what's already out there.

Earlier this year, I asked Massimo Pezzini, former Gartner analyst and an expert on enterprise integration and automation, for his views on the composable future of enterprise IT. Here's what he told me:

The application portfolio of a company in five years from now is going to look much more different than it is today in terms of the architecture - more building blocks composed together and less and less of these gigantic application suites, which are super-rich in functionality, but also very inflexible, very hard to deal with ...

At some point, I believe that the application landscape of an organization will look like a broad set of elementary business components - accounting, payables, receivables, tax calculation, what have you - possibly coming from different vendors. The end-to-end process, the end-to-end application, will be built by these fusion teams, using orchestration tools. Teams will use these tools to aggregate and compose together these component building blocks at the backend, and rearranging them and shaping them in the way which fits with the company's business needs.

To be ready for this future, enterprises need to begin their journey to Tierless Architecture now, and IT teams must engage with business colleagues to ensure it delivers maximum value.

This is the fourth chapter in a series of seven exploring the journey to Frictionless Enterprise:

You can find all of these articles as they're published at our Frictionless Enterprise archive index. To get notifications as new content appears, you can either follow the RSS feed for that page, keep in touch with us on Twitter and LinkedIn, or sign up for our fortnightly Frictionless Enterprise email newsletter, with the option of a free download of The XaaS Effect d·book.

More here:
Frictionless Enterprise - the Tierless Architecture of composable IT - Diginomica

UW-Madison moves up U.S. News list, ranked 38th overall and 10th best public – University of Wisconsin-Madison

Bascom Hall is pictured in an aerial view of the University of Wisconsin-Madison campus. Photo: Jeff Miller

The University of Wisconsin-Madison has been ranked 38th overall and 10th among public institutions (both in three-way ties) in U.S. News & World Report's 2022-23 rankings of best colleges.

Last year, UW-Madison was ranked 42nd in a five-way tie and 14th among public institutions.

The rankings, released today, include 440 national doctoral universities and are in the 2022-2023 edition of America's Best Colleges.

"As one of the world's top universities, UW-Madison delivers a high-quality education that provides life-long value to our students," said Chancellor Jennifer L. Mnookin. "While rankings are only one measure of excellence, I'm pleased to see so many areas of our success reflected."

U.S. News gathers data from and about each school regarding undergraduate academic reputation, student excellence, faculty resources, expert opinion, financial resources, alumni giving, graduation and retention rates, graduate rate performances and social mobility. Each indicator is assigned a weight based on U.S. News judgments about which measures of quality matter most.

UW-Madison continues to perform especially well in peer reputation and was ranked 28th overall and seventh among public institutions for the second year in a row.

The university also moved up in several categories, including 51st in Financial Resources, up from 52nd last year, and 57th in Student Excellence, up from 59th last year.

UW-Madison continues to improve in Faculty Resources and was ranked 84th overall, up from 107th last year. The university has risen 63 places in this ranking category in the past four years. Institutions ranking highly in this category are those with high faculty compensation and small class sizes, along with highly qualified faculty and instructional staff.

"We know how valuable our faculty members are, and that's why we have prioritized recruiting and retaining outstanding people to enhance our educational programs," says John Karl Scholz, Provost and Vice Chancellor for Academic Affairs. "That commitment, along with our extraordinary University and Academic Staff, continues to bolster the university's standings in these rankings."

The six factors used to calculate the Faculty Resources portion of the ranking are: an index score for class size, advantaging institutions with smaller class sizes; the proportion of instructional staff with the highest degree in their field; the student:faculty ratio; the proportion of faculty who are full time; and faculty salary, which is defined as the average faculty pay (salary only) for assistant, associate and full professors in the 2020-21 and 2021-22 academic years, adjusted for regional differences in the cost of living based on open source data from the Bureau of Economic Analysis' regional price parities December 2021 dataset.

UW-Madison maintained its ranking of 39th in Graduation and Retention Rates for the second year in a row, with an 89 percent six-year graduation rate and a 95 percent first-year retention rate.

U.S. News also evaluated undergraduate engineering, business, computer science and nursing programs.

UW-Madison's undergraduate engineering program ranked 13th overall (three-way tie), up from 15th overall last year (four-way tie), and seventh (two-way tie) among public doctoral-granting institutions for the third year in a row.

Ranked programs include 21st (two-way tie) in biomedical engineering, ninth (two-way tie) in chemical engineering, 14th in civil engineering, 16th in computer engineering, 15th (three-way tie) in electrical engineering, 25th (two-way tie) in environmental engineering, 12th (two-way tie) in industrial/manufacturing/systems engineering, 14th in materials engineering and 20th (three-way tie) in mechanical engineering.

UW-Madison's undergraduate business program ranked 19th overall (four-way tie) and ninth (four-way tie) among publics, both for the second year in a row. It was ranked first in both real estate and insurance/risk management (two-way tie).

Other ranked specialties include eighth in marketing, 17th in accounting, 24th (four-way tie) in finance and 30th (six-way tie) in management.

UW-Madison's undergraduate computer science program ranked 16th overall (seven-way tie), up from 18th last year, and eighth (four-way tie) among public universities, up from ninth last year. UW-Madison was ranked 23rd (three-way tie) in artificial intelligence, sixth in computer systems, 17th (four-way tie) in data analytics and 11th (two-way tie) in programming.

UW-Madison's undergraduate nursing program ranked 22nd (seven-way tie) overall and 17th (five-way tie) among public universities.

Other categories include:

Best colleges for veterans: 18th overall and 10th among publics (both in three-way ties), up from 20th overall and 14th among publics last year. Institutions included on this list must be certified for the GI Bill, participate in the Yellow Ribbon Program or be a public institution that charges in-state tuition to all out-of-state veterans, must have enrolled a minimum of 20 veterans and active service members in the 2021-22 academic year and must be ranked in the top half of the institution's overall U.S. News ranking category.

Best value schools: 23rd among publics, based on a ratio of quality to price (overall rank divided by net cost), the percentage of undergraduates receiving need-based scholarships or grants and the percent of a school's total cost of attendance that was covered by the average need-based scholarship or grant aid.

Academic programs to look for: Institutions are nominated by presidents/chancellors/provosts and enrollment management/admissions leaders in several student experience areas, including Study Abroad, ranked 25th overall (five-way tie) and fifth among publics (four-way tie), and First Year Experience, ranked 42nd (seven-way tie) overall and sixth among publics.

Schools with the most international students: UW-Madison is listed as a school with the most international students, with international students making up 10 percent of the student body. U.S. News does not rank this metric in its publication, but UW-Madison had the 53rd highest percentage of international students among national universities.

To see the full rankings, click here.

Original post:
UWMadison moves up U.S. News list, ranked 38th overall and 10th best public - University of Wisconsin-Madison