Open Source and Open Standards: The Recipe for Success Featured – The Fast Mode

Over the last ten years, technological advancements across the world have been remarkable, with the number of things that can be connected growing exponentially. By 2025, more than 75 billion devices are expected to be connected to the Internet worldwide. As the decade unfolds, demand for streaming services and other bandwidth-hungry applications will only increase, and the need to support them will continue to mount.

To effectively do this, operators and vendors must focus on lowering costs for service deployment, fostering greater interoperability for deployment flexibility, and shortening service deployment times for market agility, whilst maintaining Quality of Experience (QoE). Paving the way for these new requirements is Broadband Forum, which is unifying the best of open standards and open source to deliver the agile technologies that enable the necessary network transformations and services of the future.

The rise of cloudification

Emerging technologies such as 5G and the proliferation of devices driven by the Internet of Things (IoT) have applied significant pressures to the network architecture. As a result, cloud technologies including Software Defined Networking (SDN) and Network Functions Virtualization (NFV) have become a key business consideration.

By introducing cloud concepts into the Central Office (CO), operators can make their networks more agile and scalable by improving flow control and enhancing functional flexibility. With operators well-versed in these benefits, it is no surprise how quickly the number of networks leveraging these technologies has grown. For example, the global telecoms cloud market is expected to grow from $9 billion in 2016 to $29 billion by 2021.

However, the challenges around deployment, migrating to a cloud-based CO and how new and old technologies can co-exist remain.

Overcoming the challenges

Addressing these challenges through standardization is Broadband Forum's Cloud Central Office (CloudCO) initiative. This open interface is a recasting of the Central Office hosting infrastructure that utilizes NFV, SDN and cloud technologies to support network functions. The CloudCO's functionality can be accessed through a northbound API, allowing operators, or third parties, to consume its functionality while hiding how that functionality is achieved from the API consumer. The system acts as a foundation for an ecosystem to evolve, helping the thriving community of suppliers and service providers in their quest to embrace the cloud with all the benefits that interoperability brings.

Unifying open source and open standards in this way is key for network automation and ensuring the efficient delivery of broadband access technologies like cloud, NFV and SDN. Open standards are needed in order to align the industry on common architecture and migration approaches. Without these standards, operators would not be able to protect their existing asset investments and launch new opportunities for service development. Together with open source, standardization will enable seamless co-existence, by ensuring existing equipment can interoperate with new technologies, eliminating the need for all the equipment to be replaced.

Part of CloudCO, one open source solution which has gained significant traction is Broadband Forum's Open Broadband - Broadband Access Abstraction (OB-BAA). This allows accelerated deployment of cloud-based access infrastructure and services and facilitates co-existence and migration. OB-BAA can be adapted to many software-defined access models, and the speed at which service providers can deploy standardized cloud-based infrastructures has notably improved as a result. This added functionality has enabled the flexible solutions needed by SDN/NFV-based networks.

The OB-BAA project is designed to be deployed within the Forum's CloudCO environment as one or more virtualized network functions (VNFs). It specifies Northbound Interfaces (NBI), Core Components and Southbound Adaptation Interfaces (SAI) for functions associated with access network devices that have been virtualized. Building on previous releases of the OB-BAA code distributions, Broadband Forum has most recently published Release 3.0 of the project. Release 3.0 provides capabilities to manage Simple Network Management Protocol (SNMP) based Access Nodes via vendor adapters, thus accelerating migration to SDN-based automation platforms. It aims to take OB-BAA to the next level, providing operators with the tools to monitor and enhance network performance cost-effectively and efficiently.

Collaboration is key

The revolution of the broadband industry is now upon us - and a wide variety of different requirements have to be addressed by operators across the world. The involvement of the whole industry in the standards process is needed to ensure that the new standards developed for the world of automation are scalable and far-reaching.

Overall, OB-BAA will make it possible for operators to migrate to and manage programmable network environments, where new services can be deployed rapidly through interaction with the common abstraction of Access Nodes. Operators and equipment manufacturers will be able to reap the benefits of greater networking flexibility and be able to streamline development by implementing standard interfaces, while differentiating their service offering via stable standardized platforms.

Facilitating an agile, flexible and integrated approach, OB-BAA allows service providers to embrace the best of open source and open standards, creating a programmable broadband network which delivers on the promise of next-generation broadband, while reducing service providers' costs and protecting their investments.

More information about Broadband Forum's work on Open Broadband Software is available from the Forum.

Coding within company constraints – ComputerWeekly.com

Assuming that there is at least some level of software development occurring in every business, it needs to be run optimally, be of high quality and be as cost-efficient as possible. At the same time, the pandemic has shown that some software may be called upon to do things that go way beyond its original design goals.

There seem to be a few areas of technology that resonate among the experts Computer Weekly has spoken to about what defines modern software development. Topping the list is containerised microservices. One of the main benefits of this approach is that code can be developed, tested, deployed and run in production in a manner that limits its impact on the stability of the overall IT system.

Conway's law states that any organisation that designs a system will produce a design whose structure is a copy of the organisation's communications structure. According to Perry Krug, director for customer success at Couchbase, Conway's law cannot be fought, but he believes that by understanding its influence, it is possible to work within its constraints.

Its influence extends to the realms of software development, which means modern software development practices need to be cognisant of the organisational structure that exists within a business.

At one level, this may seem to contradict the more collaborative working practices that define modern software development. "You need to collaborate to successfully develop applications," says Krug. "If new ways of working make close collaboration difficult, collaborate loosely. If shared resources are causing constraints and bottlenecks, couple software more loosely to its foundations. Trends like microservices are an explicit recognition of these constraints and should now be coming into their own."

Some industry experts believe that microservices architecture empowers software innovation.

One of these is Arup Chakrabarti, senior director of engineering at PagerDuty. "Microservices enable a far greater focus on customer experience than was previously the case," he says. "Isolating areas of the overall code base enables the creation of mini innovation factories across the business, each operating at its own pace. With less to integrate, there's less stepping on toes."

However, in Chakrabarti's experience, the use of microservices can itself lead to an explosion in complexity and all the challenges that come with that.

Bloomberg, for instance, has split some of its monolithic applications into containerised microservices that run as part of a service mesh. But this architecture has brought its own set of challenges, particularly as it can sometimes be hard for developers to fully understand what is actually going on.

Peter Wainwright, senior engineer on the developer experience (DevX) team at Bloomberg, says distributed trace can help, since it allows an engineer to see all of a service's dependencies. Downstream services can also see which upstream services rely on them.

"A prime example is our company-wide securities field computation service. Calculations around the universe of securities can take place in many places. Knowing where to route requests for such computations is non-trivial, so we use a sort of smart proxy that becomes a black box router for requests," he says.

"Services provide data for requests without needing to know who's asking. Unfortunately, this presents an obstacle when there are performance problems. Distributed trace restores visibility into 'who I am calling and who is calling me' that the monolith previously gave engineers implicitly."
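Wainwright's point about restoring "who calls whom" visibility can be sketched in miniature. Real systems use standardized propagation (e.g. W3C Trace Context via OpenTelemetry); the snippet below is a simplified stand-in with hypothetical service names, showing only the core idea: every hop carries the same trace ID, so the call graph can be reconstructed afterwards.

```python
import uuid

def make_trace_context() -> dict:
    # A new trace: one ID shared by every service the request touches.
    return {"trace-id": uuid.uuid4().hex, "span-id": uuid.uuid4().hex[:16]}

def call_downstream(service: str, headers: dict, log: list) -> None:
    # The downstream service records the inbound trace ID and its parent span,
    # so querying the log by trace ID later rebuilds the dependency chain.
    span = {
        "trace-id": headers["trace-id"],
        "parent-span": headers["span-id"],
        "span-id": uuid.uuid4().hex[:16],
        "service": service,
    }
    log.append(span)

trace_log = []
ctx = make_trace_context()
for svc in ("pricing", "securities-fields", "risk"):  # hypothetical services
    call_downstream(svc, ctx, trace_log)

# All three spans share one trace ID, so the call graph can be reconstructed.
assert len({span["trace-id"] for span in trace_log}) == 1
```

In a production setup the context travels in HTTP headers rather than a Python dict, but the correlation mechanism is the same.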

To ensure its engineers could focus on their applications, Bloomberg has built scalable platforms using open source products like Kubernetes, Redis, Kafka and Chef. This, says Wainwright, enables developers to use turnkey infrastructure for the heavy lifting and drop in their application code.

In terms of reducing bugs, there is plenty that can be gleaned from how open source code is tested.

"Successful open source projects, like the GNU Compiler Collection (GCC) that builds large parts of a Linux distribution, have for decades required running a test suite before submitting a patch for inclusion," says Gerald Pfeifer, chief technology officer (CTO) at SUSE.

He says open source projects like LibreOffice use tools such as Gerrit to track changes and tightly integrate those with automation tools such as Jenkins that conduct builds and tests.

"For such integration testing to be effective long term, we need to take it seriously and neither skip it, nor ignore regressions introduced," says Pfeifer.

He believes extensive automation is a key success factor, as is enforcing policies, to ensure code quality. "If your workflow involves code reviews and approvals, doing automated testing before the review process even starts is a good approach," he says.

According to Pfeifer, LibreOffice and GCC share an important trait with other successful projects focusing on quality. "Whenever a bug is fixed, a new test is added to their ever-growing and evolving regression test suites," he says. "That ensures the net which is cast constantly becomes better at catching, and hence avoiding, not only old issues creeping in again, but also new ones from permeating. The same should apply to new features, though when the same developers contribute both new code and test coverage for those, there tends to be a risk of having a blind eye."
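Pfeifer's "every bug fix adds a test" practice can be illustrated with a small, hypothetical example; the `parse_version` helper and its bug are invented purely for illustration. Suppose the helper once returned the wrong shape for versions without a patch number; the fix lands together with a test that pins the old failure case forever.

```python
def parse_version(s: str) -> tuple:
    parts = s.split(".")
    while len(parts) < 3:  # the fix: pad missing fields so "3.1" parses like "3.1.0"
        parts.append("0")
    return tuple(int(p) for p in parts[:3])

def test_parse_full_version():
    assert parse_version("3.1.4") == (3, 1, 4)

def test_regression_missing_patch_number():
    # Pins the previously fixed bug so it cannot silently return.
    assert parse_version("3.1") == (3, 1, 0)

test_parse_full_version()
test_regression_missing_patch_number()
```

In a project following the GCC/LibreOffice model, the second test would stay in the suite indefinitely, growing the "net" Pfeifer describes.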

Describing how its service mesh architecture is tested, Bloomberg's Wainwright says: "Instead of bundling changes on a fixed schedule, better testing enables us to make changes more rapidly. Changes are smaller, so there's less that can go wrong and mitigation is simpler."

While code quality is often measured in terms of defect rates, error budgets and the like, Wainwright believes the real benefit of easier testing is psychological. "Teams confident that their tools have their back will make more progress in a shorter time," he says. "That means we deliver more value to our customers faster."

However, as PagerDuty's Chakrabarti points out, one of the biggest changes of recent years is customers' intolerance for anything less than perfection when it comes to performance. Many believe engineers have their own hundred-millisecond rule to contend with. "Come back later is no longer an acceptable response," he says.

According to Chakrabarti, the idea of engineering web-scale applications is pretty much taken as a given these days, particularly in the wake of the coronavirus, which has seen companies scrambling to support new ways of working. Data from PagerDuty shows that new code and heavier volumes of traffic have resulted in more incidents (as much as 11 times more in some sectors).

"As an industry, we've got better at fixing things without affecting customer experience, including through an automated approach to digital operations management. It's still early days for auto-remediation, but we are starting to hand over more control to technology. With time and further advances in machine learning, we ought to be able to teach some of our systems to self-heal based on past events, even in circumstances that only occur once in a blue moon," says Chakrabarti.

Beyond the technical aspects of developing bug-free code, built in a modern way, such as the self-contained IT architectures and microservices approach discussed earlier, the coronavirus pandemic has made team communications a top priority.

In a socially distanced world, product managers are more important than ever. "Developers will know what needs to be done to create a solution if they know what they're supposed to solve," says Couchbase's Krug. However, solving these issues means getting inside the customer's head. Expecting developers to become psychologists is a step too far. Instead, there needs to be clear communication between product managers and developers over what customers' issues are and, ideally, what they want their end state to be.

This means fast communication with distributed teams is essential, adds Krug. If it isn't in place, a priority for all teams in a business should be making sure it is.

AWS Controllers for Kubernetes Will Be A ‘Boon For Developers’ – CRN: Technology news for channel partners and solution providers

A new Amazon Web Services tool that allows users to manage AWS cloud services directly within Kubernetes should be a boon for developers, according to one AWS partner.

AWS Controllers for Kubernetes (ACK) is designed to make it easier to build scalable and highly available Kubernetes applications that use AWS services without the hassle of defining resources outside a cluster or running supporting services such as databases, message queues or object stores within a cluster.

The AWS-built, open source project is now in developer preview on GitHub, which means the end-user-facing installation mechanisms aren't yet in place. ACK currently supports Amazon S3, AWS API Gateway V2, Amazon SNS, Amazon SQS, Amazon DynamoDB and Amazon ECR.

"Our goal with ACK (is) to provide a consistent Kubernetes interface for AWS, regardless of the AWS service API," according to a blog post by AWS principal open source engineer Jay Pipes, Michael Hausenblas, a product developer advocate for the AWS container service team, and Amazon EKS senior project manager Nathan Taber.

ACK got its start in 2018 when Chris Hein, then an AWS partner solutions architect, debuted AWS Service Operator (ASO) as an experimental project. Feedback prompted AWS to relaunch it last August as a first-tier, open-source software project, and AWS renamed ASO as ACK last month.

"The tenets we put forward are: ACK is a community-driven project based on a governance model defining roles and responsibilities; ACK is optimized for production usage with full test coverage, including performance and scalability test suites; (and) ACK strives to be the only code base exposing AWS services via a Kubernetes operator," the blog post states.

ACK continues the spirit of the original ASO, but with two updates in addition to now being an official project built and maintained by the AWS Kubernetes team. AWS cloud resources now are managed directly through AWS APIs instead of CloudFormation, allowing Kubernetes to be the single source of truth for a resource's desired state, according to the blog post. And code for the controllers and custom resource definitions is generated automatically from the AWS Go SDK, with human editing and approval.

"This allows us to support more services with less manual work and keep the project up to date with the latest innovations," the AWS blog post stated.

ACK is a collection of Kubernetes Custom Resource Definitions and custom controllers that work together to extend the Kubernetes API and create AWS resources on behalf of a user's cluster, according to AWS. Each controller manages custom resources representing API resources of a single AWS service.

Kubernetes users can install a controller for an AWS service and then create, update, read and delete AWS resources using the Kubernetes API in lieu of logging into the AWS console or using AWS Command Line Interface to interact with the AWS service API.

"This means they can use the Kubernetes API to fully describe both their containerized applications, using Kubernetes resources like Deployment and Service, as well as any AWS managed services upon which those applications depend," AWS said.
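As an illustration of how this looks in practice, a sketch of an ACK custom resource describing an S3 bucket follows. The group/version, kind and field names reflect the developer preview as described publicly and may change; treat the manifest as indicative rather than authoritative.

```yaml
# Hypothetical ACK custom resource: an S3 bucket described via the Kubernetes API.
# apiVersion/kind/spec fields follow the developer preview and may change.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: example-bucket        # name of the Kubernetes object
spec:
  name: my-ack-example-bucket # name of the S3 bucket to create
```

Once the S3 controller is installed in the cluster, applying a manifest like this with `kubectl apply -f` would cause the controller to call the S3 API directly, with the cluster holding the desired state, rather than going through the AWS console, CLI or CloudFormation.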

AWS plans to add ACK support for Amazon Relational Database Service and Amazon ElastiCache, and possibly Amazon Elastic Kubernetes Service (EKS) and Amazon Managed Streaming for Apache Kafka.

The cloud provider, which is seeking developer input on the expected behavior of destructive operations in ACK and whether it should be able to adopt AWS resources, also is working on enabling cross-account resource management and native application secrets integration.

AWS Partner Reaction

ACK is a strategic move for AWS, especially as it competes with other Kubernetes offerings from competitors including Google Cloud, which already offers native integration from its Google Kubernetes Engine (GKE) to cloud services such as Spanner and BigQuery, according to Bruno Andrade, co-founder and CEO of AWS partner Shipa, a Santa Clara, Calif., startup that launched this year and integrates directly into AWS' Kubernetes offerings and services.

"We believe ACK makes total sense, especially for users that are looking at building a true cloud-native application, where there is native integration to cloud services for their application directly from their clusters, which can drastically reduce the time to launch applications or roll out updates," said Andrade, whose company allows teams to deploy and operate applications without having to learn, write and maintain a single Kubernetes object or YAML file.

ACK and the GKE connector are focused on services running within their own clusters and clouds, Andrade said, "so one thing that still (needs) to be fully addressed is cases when customers have clusters running across multiple clouds and on-premises, and how the workloads running across these clusters will properly connect across the cloud-native services offered by the different services."

"When using Kubernetes clusters in production, workloads typically need to integrate with other cloud services and resources to deliver their intended solutions," said Kevin McMahon, executive director of cloud enablement at digital technology consultancy SPR, an AWS Advanced Consulting Partner based in Chicago.

"Integrating with the cloud services provided by vendors like AWS requires custom controllers and resource definitions to be created," he said. "AWS Controllers for Kubernetes makes it easier to enhance Kubernetes workloads using AWS cloud services by providing vendor-managed, standardized integration points for companies relying on Kubernetes. Now companies looking to use Kubernetes can completely describe their applications and the AWS managed services that those applications rely on in one standard format."

With ACK, AWS continues to simplify the deployment and configuration of its services by integrating natively with Kubernetes, said Alban Bramble, director of public cloud services at Ensono, an AWS Advanced Consulting Partner and managed services provider headquartered in Downers Grove, Ill.

"This change will be a boon for developers looking to speed up releases and manage all resources from a single deployment," Bramble said.

But one area of possible concern, according to Bramble, is that this could negatively impact policies already put in place by SecOps teams, resulting in resources being deployed without their knowledge and thereby reducing their ability to effectively monitor and secure the services running in the environment.

"Careful consideration and planning needs to take place between those two groups in order to ensure that processes are in place that don't stifle the developers' ability to work within agile release cycles, while also accounting for the governance and security policies already in place," he said.

Telegram launches one-on-one video calls on iOS and Android – The Verge

Secure messaging app Telegram has launched an alpha version of one-on-one video calls on both its Android and iOS apps, the company announced, saying 2020 had highlighted the need for face-to-face communication.

In a blog post marking its seventh anniversary, Telegram described the process for starting a video call: tap the profile page of the person you want to connect with. Users are able to switch video on or off at any time during a call, and the video calls support picture-in-picture mode, so users can continue scrolling through the app if the call gets boring. Video calls will have end-to-end encryption, Telegram's blog post states, one of the app's defining features for its audio calls and texting.

"Our apps for Android and iOS have reproducible builds, so anyone can verify encryption and confirm that their app uses the exact same open source code that we publish with each update," according to the post.

In April, Telegram announced it would launch group video calls later this year. This isn't quite that, but in the most recent blog post, the company indicated that video calls "will receive more features and improvements in future versions, as we work toward launching group video calls in the coming months."

Telegram said in April it had reached 400 million monthly active users.

‘Trapped in a code’ the fight over our algorithmic future – Open Democracy

"We ain't your target market, and I ain't your target market," sang the band Sisteray on their 2018 song Algorithm Prison. They don't wish, as they put it, to be "trapped in a code". Who would? Yet Sisteray know that we are all exposed to a powerful and judgmental data gaze: as the camera zooms out at the end of their official video, the band members all stare blankly at their phones, the words "target market" etched into the dirt on the back windscreen of their van.

The song shows how algorithms have risen in fame and notoriety in the last few years. And it is illustrative of a widely held concern over what these algorithms are doing to us. Often depicted as shadowy and constraining structural forces, algorithms are a source of significant anxiety. People often worry that these bits of code have a powerful but unknown sway over their lives. With an algorithmic grading system implemented to produce the recent A Level results, this anxiety is something we have seen magnified in recent days.

It seems that there was an implicit assumption on the part of those awarding the grades, that simply evoking the concept of the algorithm would be enough to reassure people of the objectivity and systematic fairness of the results. The calculative logic behind algorithmic decisions is often based on ideals of objectivity, neutrality and accuracy. In assuming that others would also see algorithms in those positive terms, the existing unease and scepticism about such systems was missed.

The backlash against how an algorithm was used to decide people's exam results and the unevenness of the outcomes for people from different backgrounds gives us an insight into the ongoing tussle over our algorithmic future. This standardisation of grades tells us something about what we are willing to tolerate when being judged by algorithms.

Where we are conscious of their presence, algorithms are mostly tolerated rather than celebrated. There's often a kind of grudging awareness and acceptance, although feelings toward algorithms already go well beyond uneasiness for those on the sharp end of automated decisions, especially where prejudice, discrimination and inequality are coded into what Safiya Umoja Noble has called "algorithms of oppression".

Perhaps the most widely understood algorithmic processes to date, however, are those associated with tech platforms and social media. It is here that algorithms are most noticeably active in making predictions for us and about us, and where media coverage has tended to focus. But it turns out that those being examined did not wish their results to be calculated in a similar way to the selection of their next YouTube clip, Netflix film, TikTok video, Instagram picture feed or Spotify song. The models are clearly totally different, but this is the parallel that all of the talk of algorithms was likely to draw. Other people's data are considered OK for predicting cultural consumption, but the same cannot be said for predicting educational attainment.

It seems that there are forms of algorithmic prediction that are considered to be acceptable and others, clearly, that are not. All automation creates tensions, but some decisions or predictions ("here are the grades the algorithm says you would have got") appear to break beyond the tacitly established limits of broad acceptability.

There is a concern, as expressed in that Sisteray song, that algorithms are a kind of trap and that they routinely use our data to lock us into fixed patterns. In his recent book on the long development of such systems, the historian Colin Koopman describes how data gathering processes have the effect of "fastening". Both the boxes we are put in, and the gaps in the forms that are completed about us, hold us in place, he argues, with many of the categories and logics behind data usage having been established a century or more ago. This fastening process is also echoed in Deborah Lupton's recent notion of "data selves", in which we have become inseparable from our data. Both Koopman and Lupton point to how data are used to make us up. Recent events could be seen as a rejection of one aspect of that fastening. By reducing these students simply to a data point within a historic cohort, the loss of a sense of the individual was overpowering. The students were not happy about being fastened in place by these particular algorithmic processes.

When you combine the existing scepticism toward algorithms with an algorithmic system that is so overt in its uneven outcomes, this level of reaction was always likely. It seems that the adjusted results will not now stand, but the full impact of what has happened is not yet clear. What is clearer is that the notion or concept of the algorithm will continue to be a site of tension. In such a context, a crude belief in algorithms is unlikely to go unchallenged. It would seem that many of these A Level students agree with Sisteray: they don't wish to be "trapped in a code".

Newham test and trace app was designed by man who grew up in the borough – Newham Recorder

PUBLISHED: 15:00 21 August 2020

Jon King

Randeep Sidhu designed the test and trace app being rolled out across Newham today (August 21). Picture: Nathan Dainty

The man who designed the latest test and trace app, which is being trialled in the borough, grew up in Newham.

Randeep Sidhu is head of product for the NHS initiative which launches across the borough today (August 21).

Mr Sidhu, speaking to the Recorder, said: "We've been working really hard to try and build something that helps the communities most affected by Covid."

The version of the app being rolled out is significantly different to the one originally used in the Isle of Wight.

A self-confessed "complete geek", Mr Sidhu explained that the Newham app includes exposure notifications API technology developed by Google and Apple.

"This meant we could do things in a fundamentally different way that was safer, more privacy-preserving and, hopefully, less battery-draining, more accessible, just better," Mr Sidhu said.

He added that the app tracks the virus, not the person, unlike the one originally used on the Isle of Wight. It doesn't ask for a name, address or date of birth.

Countries across the world including Germany and Denmark are using the same technology in their own apps, which have been approved by the tech giants.

The government's cyber security team has also given it the green light.

"I'm a brown person who grew up in London. I have all the trauma and drama that comes with that. So I understand concerns about privacy in my heart. In terms of how secure it is, I'm very comfortable," Mr Sidhu said.

The app is open-source, which means its source code is available to the public.

The tech obsessive was born in Southall, but was partly raised in the borough. His uncle runs a greengrocer's in Green Street.

"Newham is very close to my heart. I've been to every bit of it. This is both personal and professional pride, to be able to both build something which helps the communities affected, but also work in Newham," he said.

The borough had the highest Covid-19 death rate in May, according to the Office for National Statistics.

From today everyone in Newham will be sent an email or letter with a unique code allowing them to install the app on their smart phone.

The app provides infection spike alerts based on postcodes to warn if there is a rise in local infection rates, and a digital QR check-in, which can be used at venues like restaurants, shops and pubs to register that you have visited.

It has a symptom checker and a link to allow easy access to coronavirus tests.

And there is a timer for people who are self-isolating to help count down that period.

The app will work alongside contact tracing and testing already offered in Newham.

It should alert you if you have been near someone who has tested positive for the virus and has the app.

The app will be available in a number of languages, starting with English, Urdu, Punjabi, Bangla and Gujarati.

On how it can help anyone without a smart phone, Mr Sidhu said they would be protected by those around them who, as a result of using it, could avoid catching the virus.

"Absolutely, we should try and get this app into as many people's hands as possible. But there is still a benefit to as many people as possible using it."

"This app works. What we're trying to understand is how it works in real users' hands," he added.

While the app will hopefully reduce the R-level (the average number of people one infected person will pass the virus on to), the trial's purpose is to gauge its use.

On why Newham was chosen, Mr Sidhu said: "The mayor of Newham and Newham Council have been amazing. Rokhsana has been so supportive of this."

"They are working hand in hand with us to try and make sure we can reach groups.

"This is an opportunity for Newham to have a national, and potentially international, impact."

This app is safe and secure. It does not track you, it tracks the virus. Dowloading and using it keeps you, your family and your community safe. Your doing this helps others, Mr Sidhu added.


Fitting Snake Into A QR Code – Hackaday

QR codes are usually associated with ASCII text like URLs or serial numbers, but did you know you can also encode binary data into them? To demonstrate this concept, [MattKC] embarked on a journey to create a QR code that holds an executable version of Snake. Video after the break.

As you might expect, the version 40 QR code he ended up using is much larger than the ones you normally see. Consisting of a 177 by 177 grid, it's the largest version that can still be read by most software. This gave [MattKC] a whopping 2,953 bytes to work with. Not a lot of space, but still bigger than some classic video games of the past.
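For reference, the QR specification ties symbol size directly to version number; a quick sketch of the sizing rule (the 2,953-byte figure is the spec's binary capacity for version 40 at the lowest error-correction level):

```python
# QR symbol size per the spec: a version-v code measures (17 + 4*v)
# modules per side, so version 1 is 21x21 and version 40, the largest
# defined, is 177x177.
def qr_modules_per_side(version: int) -> int:
    return 17 + 4 * version

print(qr_modules_per_side(1))   # 21
print(qr_modules_per_side(40))  # 177
```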

To start, he first wrote Snake to run in a web browser using HTML, CSS, and JavaScript, which was able to fit in the available space. Modern browsers do a lot of the lifting with built-in features, and [MattKC] wanted more of a challenge, so he decided to instead create a Windows executable file. His first attempts with compiled C code were too large, which led down the rabbit hole of x86 Assembly. Here he found that his knowledge of Assembly was too limited to create a small enough program without investing months in the project. He went back to C and managed to compress his executable using Crinkler, a compressing linker commonly used in the demoscene. This shrunk the file down to 1,478 bytes.

Zbar, a command-line barcode reader for Windows, was used to test the final Snake QR code. [MattKC] discovered a bug in Zbarcam that prevented it from reading binary data via a webcam input, so, through the power of open source, he submitted a bug fix which is now integrated into the official release.

All the files are available for anyone to play with on [MattKC]'s website. The video below goes into a lot of detail on the entire journey. Since this project proves software can be embedded in QR codes, it means that malware could also be hidden in a QR code, if there is an exploitable bug somewhere in a smartphone QR reader app.

QR codes are an interesting tool with a variety of uses. Take a deep dive into how they work, generate a 3D printable version, or build a QR jukebox, if you want to learn more.


In-App Bidding Gathers Steam, But Adoption Looks Nothing Like Header Bidding On The Web – AdExchanger

Mobile app advertisers have been slower than their web counterparts to embrace programmatic-style RTB auctions.

That's starting to change as more app publishers test in-app bidding and see significant lifts in ARPDAU (average revenue per daily active user).

"Publishers are pushing their ad networks to get into bidding, and we're beginning to see a snowball effect," said David Gregson, a product manager at MoPub.

But the path to wider adoption of in-app programmatic auctions will be different than on the web, in part because of the long-established dynamics between app publishers and their partners, the emergence of different standards in apps vs. web and the fact that mobile technology is very different from web technology.

Here are five of the main differences between in-app bidding and its web-based sibling.

Ad network dominance

Web publishers took to header bidding like ducks to water when the practice started to gain traction around 2014. Frustration with Google's advantage in the auction helped grease header bidding's wide adoption.

"The industry was looking to find alternative solutions to the DFP and AdX monopoly," said Prajwal Barthur, VP of product at InMobi.

But in the app ecosystem there hasn't been the same motivation.

In the mobile app world, ad networks dominate, and mediation platforms centralize access to inventory across ad networks. App publishers are accustomed to maximizing their yield in waterfall-based mediation platforms.
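The difference between the two mediation models can be sketched in a few lines; the network names and CPM prices below are invented purely for illustration:

```python
# Waterfall mediation: networks are called in a fixed priority order,
# and the first one that fills the impression wins, regardless of price.
def waterfall(calls):
    for network, cpm in calls:
        if cpm is not None:  # None means the network passed on the impression
            return network, cpm
    return None

# Unified (in-app bidding) auction: every network bids on the same
# impression at once, and the highest price wins.
def unified_auction(bids):
    return max(bids, key=lambda bid: bid[1])

calls = [("NetworkA", 2.10), ("NetworkB", 3.40), ("NetworkC", 2.80)]
print(waterfall(calls))        # ('NetworkA', 2.1) -- priority order wins
print(unified_auction(calls))  # ('NetworkB', 3.4) -- price wins
```

The gap between the two outcomes is, roughly, the yield lift publishers report when they move from a waterfall to a unified auction.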

Bandwidth issues

One reason ad networks have maintained a strong foothold with app publishers has to do with mobile connection speeds and reliability.

With desktop header bidding, the pricing and bidding happens through multiple exchanges simultaneously as the page loads. But on mobile, where bandwidth may be lower and there is no header, it creates a better user experience for publishers to work through an SDK and asynchronously cache ads ahead of time.

Mediation platforms allowed publishers to choose the order in which they wanted to call ad networks, but didn't give a per-impression price, Gregson said.

Ad networks are still usually paid out on a cost-per-install or cost-per-action basis rather than by impression.

"That is why we're not in a unified auction-centric world yet, although we're getting there," Gregson said. "It's a learning curve for ad networks, trying to work out how to bid on a per-impression basis if expenses are paid out on a CPI basis."

Different standards

There also isn't a standard way to bid into apps.

As opposed to the web, where there are shared standards, each mobile ad network applies its own set of rules for how to load an ad, notify publishers of availability, render the ad and pre-cache it. "This makes the adoption of in-app bidding less standardized and hence more complex," said Nimrod Zuta, VP of product at ironSource.

Whereas header bidding on the web has coalesced around client-side JavaScript wrappers, including Prebid, and server-side wrappers, which allow SSPs, exchanges and wrapper solutions to communicate using the RTB protocol, the same is not true in the app world, Zuta said.

When demand comes from an ad network SDK, the network has a direct relationship with the publisher. A piece of code is integrated into the app itself. This type of relationship doesn't exist in web browsing. Mobile ad networks don't buy on the open exchange or provide real-time bids for available impressions.

Transparency

And although by now almost all web-based header bidding has been standardized on first-price auctions (that's even true for Google), in-app auctions don't clear in a standardized or transparent way, said David Simon, CRO of Fyber.

"It's common for bidders to bid into an SSP auction via a line item and then again in an in-app header, thus bidding against themselves," he said.

One of the major selling points of header bidding is transparency. But one of the biggest barriers to wider adoption of unified auctions in apps is the lack of open source standards to support them.

"Desktop rode the open source wave with Prebid.js leading the header-bidding initiative for transparency," said InMobi's Barthur. "Lots of SSPs supported the same and started building on top of Prebid."

But this is still not the case on the mobile side, he said.

In the desktop world, there's Prebid, Index Exchange's header tag and Amazon's Transparent Ad Marketplace. But apps didn't get Prebid support until 2017, with the release of Prebid Mobile.

Even with an open source in-app bidding solution available, however, many app publishers still work with mediation platforms to open up their waterfall. And not every mediation platform supports every ad network.

Playing both sides

Also unlike on the web, all app developers are both buyers and sellers of inventory. As a result, the incentive for developers to do combination deals with their monetization partners is much higher, Fyber's Simon said.

"Ad spend credits, revenue and CPM guarantees are much more common," Simon said. "Monetization and the efficacy of the waterfall or the yield from an auction is often heavily influenced by deals that happen independently of the auction itself."

At the same time, performance-focused DSPs prefer pricing at scale in their bidding algorithms, which doesn't jibe with how first-price auctions function, Barthur said.

"We are still in the process of making the programmatic pipes work for unified auctions," he said. "This is primarily dependent on how demand looks at such inventory."

Up next

But despite the challenges and the gradual pace of adoption, in-app bidding is slowly becoming the norm for app publishers, Barthur said.

Over the past couple of years, publishers have increasingly been testing a hybrid approach that combines the waterfall and unified auctions to ensure they don't lose any revenue. Publishers are also using in-app bidding to test different partners to see which ones yield the best revenue for their inventory.

The testing process is creating benchmarks that publishers can use to justify the move to more RTB-like setups.

InMobi, for example, is seeing 20% to 30% of traffic coming in through its unified auction platform.

"And it's scaling well," Barthur said.

Mobile header bidding ad spend grew 20% year over year, according to data released by PubMatic on Thursday. In Q2, in-app bidding rose 26% YoY, outpacing the 18% growth rate for mobile web.

Updated Aug. 21 at 6:15 p.m. to reflect the mention of Prebid Mobile.


What Is Quantum Supremacy And Quantum Computing? (And How Excited Should We Be?) – Forbes

In 2019, Google announced with much fanfare that it had achieved quantum supremacy: the point at which a quantum computer can perform a task that would be impossible for a conventional computer (or would take so long it would be entirely impractical for a conventional computer).


To achieve quantum supremacy, Google's quantum computer completed a calculation in 200 seconds that Google claimed would have taken even the most powerful supercomputer 10,000 years to complete. IBM loudly protested this claim, stating that Google had massively underestimated the capacity of its supercomputers (hardly surprising, since IBM also has skin in the quantum computing game). Nonetheless, Google's announcement was hailed as a significant milestone in the quantum computing journey.

But what exactly is quantum computing?

Not sure what quantum computing is? Don't worry, you're not alone. In very simple terms, quantum computers are unimaginably fast computers capable of solving seemingly unsolvable problems. If you think your smartphone makes computers from the 1980s seem painfully old-fashioned, quantum computers will make our current state-of-the-art technology look like something out of the Stone Age. That's how big a leap quantum computing represents.

Traditional computers are, at their heart, very fast versions of the simplest electronic calculators. They are only capable of processing one bit of information at a time, in the form of a binary 1 or 0. Each bit is like an on/off switch, with 0 meaning "off" and 1 meaning "on." Every task you complete on a traditional computer, no matter how complex, ultimately uses millions of bits, each one representing either a 0 or a 1.

But quantum computers don't rely on bits; they use qubits. And qubits, thanks to the marvels of quantum mechanics, aren't limited to being either on or off. They can be both at the same time, or exist somewhere in between. That's because quantum computing harnesses the peculiar phenomena that take place at a sub-atomic level, in particular the ability of quantum particles to exist in multiple states at the same time (known as superposition).

This allows quantum computers to look at many different variables at the same time, which means they can crunch through more scenarios in a much shorter space of time than even the fastest computers available today.
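One rough way to picture the difference (a sketch, not real quantum mechanics): a classical bit is a single 0 or 1, while a qubit is described by two amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1:

```python
import math

classical_bit = 1  # a classical bit is definitively 0 or 1

# A qubit state alpha*|0> + beta*|1> is a pair of amplitudes satisfying
# |alpha|^2 + |beta|^2 == 1; measurement yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2.
alpha = beta = 1 / math.sqrt(2)  # equal superposition of 0 and 1

p_zero, p_one = abs(alpha) ** 2, abs(beta) ** 2
print(round(p_zero, 3), round(p_one, 3))  # 0.5 0.5
```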

What does this mean for our everyday lives?

Reaching quantum supremacy is clearly an important milestone, yet we're still a long way from commercially available quantum computers hitting the market. Right now, quantum computing work is limited to labs and major tech players like Google, IBM, and Microsoft.

Most technology experts, myself included, would admit we don't yet fully understand how quantum computing will transform our world; we just know that it will. It's like trying to imagine how the internet or social media would transform our world before they were introduced.

Here are just some of the ways in which quantum computers could be put to good use:

Strengthening cyber security. Quantum computers could change the landscape of data security by creating virtually unbreakable encryption.

Accelerating artificial intelligence. Quantum computing could provide a massive boost to AI, since these superfast computers will prove far more effective at recognizing patterns in data.

Modeling traffic flows to improve our cities. Modeling traffic is an enormously complex process with a huge number of variables, but researchers at Volkswagen have been running quantum pilot programs to model and optimize the flow of traffic through city centers in Beijing, Barcelona, and Lisbon.

Making the weather forecast more accurate. Just about anything that involves complex modeling could be made more efficient with quantum computing. The UK's Met Office has said that it believes quantum computers offer the potential for far more advanced modeling than is possible today, and it is one of the avenues being explored for building next-generation forecasting systems.

Developing new medicines. Biotech startup ProteinQure has been exploring the potential of quantum computing in modeling proteins, a key route in drug development. In other words, quantum computing could lead to the discovery of effective new drugs for some of the world's biggest killers, including cancer and heart disease.

Most experts agree that truly useful quantum computing is not likely to be a feature of everyday life for some time. And even when quantum computers are commercially available, we as individuals will hardly be lining up to buy one. For most of the tasks we carry out on computers and smartphones, a traditional binary computer or smartphone will be all we need. But at an industry and society level, quantum computing could bring many exciting opportunities in the future.

Quantum computing is just one of 25 technology trends that I believe will transform our society. Read more about these key trends including plenty of real-world examples in my new book, Tech Trends in Practice: The 25 Technologies That Are Driving The 4th Industrial Revolution.


IBM Flexes Its Quantum-Computing Muscle. Will That Translate to Its Stock? – Barron’s


The 109-year-old original tech giant IBM, it turns out, is a huge quantum computing player. It has made one of the fastest quantum computers ever assembled. But as is often the case with the weird world of quantum, investors don't know what to do with that information.

IBM (ticker: IBM) published a paper on Thursday, in conjunction with Cornell University, demonstrating that the company's quantum computers have achieved quantum volume of 64. That matches the quantum volume achieved by Honeywell (HON) earlier this year.

That's great, but what does that mean? It's fast.

Quantum volume measures performance. "It's a useful measure," Paul Smith-Goodson said in an interview. "It accounts for [factors such as] error correction and noise." Goodson worked at AT&T (T) Bell Labs in the 1980s, back when physicist Richard Feynman was talking about building quantum machines. He now consults for tech consulting firm Moor Insights & Strategy.

Goodson explained that the error rate when punching, say, 10 times 10 into a calculator is about one in a billion. The error rate for a quantum computer is about one in 200. That means quantum computers have to run calculations again and have more qubits to compensate for noise. All the adjustments, noise and error correction boil down to quantum volume.
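The rerun-and-compare idea Goodson describes can be illustrated with a toy error model (the 1-in-200 error rate comes from the article; everything else here, including the repetition count, is invented for illustration):

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the sketch is repeatable

def noisy_run(correct, error_rate=1 / 200):
    """Toy noisy processor: returns a wrong answer with probability error_rate."""
    return correct if random.random() > error_rate else correct + 1

def majority_of_runs(correct, repetitions=9):
    """Repeat the computation several times and take the most common answer."""
    counts = Counter(noisy_run(correct) for _ in range(repetitions))
    return counts.most_common(1)[0][0]

print(majority_of_runs(42))  # 42 -- occasional bad runs are outvoted
```

Real quantum error correction is far more involved than majority voting, but the cost structure is the same: extra runs and extra qubits are spent compensating for noise, which is what quantum volume folds into a single number.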

It isn't directly comparable to classical computing, because the quantum world is weird, with quantum bits, or qubits, having multiple values at the same time. But the goal is to keep increasing volume and getting faster. "Honeywell's goal is to improve quantum volume by 10x a year," Goodson said. "Go from 64 to 640 in 2021 and 6,400 in 2022."

Is this rate of improvement a kind of Moore's law for quantum computers? "Not really," Goodson said. "We are in the noisy phase of quantum computing, still working on error correction."

Moores law says, roughly speaking, that the number of transistors on microchips doubles every two years. Transistors store information referred to as bits. Transistors are laid down on silicon chips. Qubits can also be stored on chips.
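The rule of thumb stated above works out to one doubling per two-year period; a quick arithmetic sketch (the starting count is arbitrary):

```python
# Moore's law rule of thumb: transistor counts double roughly every
# two years, i.e. count after t years = base * 2**(t / 2).
def projected_transistors(base: int, years: float) -> float:
    return base * 2 ** (years / 2)

print(projected_transistors(1_000, 4))  # 4000.0 -- two doublings in four years
```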

IBM uses extreme cold to put its equipment into a quantum state. What are, essentially, quantum microchips are cooled to minus 459 degrees Fahrenheit. That is just above absolute zero, as cold as anything can get.

"Our first superconducting qubit was in 2007," IBM Quantum Vice President Bob Sutor said in an interview. He is quick to emphasize that IBM has been doing quantum for a long time. "On the IBM cloud now [we have] 20 quantum computers available."

Wondering how many computations have been run on IBM quantum computers since they became available? Barron's guessed 40,000. "Three hundred billion," Sutor said, adding that 1.1 billion circuits ran on Aug. 7. A circuit is, essentially, quantum jargon for a computation.

IBM's quantum business is real, and for now it is essentially free. Companies, researchers or individuals can access the machines via the cloud to program and perform calculations. Down the road, quantum as a service could become big business for IBM, Honeywell and other players. For now, IBM is creating machines and enabling use cases, Sutor explained.

IBM and Honeywell are quantum players. Google parent Alphabet (GOOGL), Amazon.com (AMZN) and Intel (INTC) are other quantum players Goodson is familiar with. Apple isn't really into this, as far as he knows.

Investors don't trade any of those stocks based on quantum yet. IBM stock, for instance, is down about 8% year to date. Honeywell shares are down about 11%. Both returns trail the comparable gains of the S&P 500 and Dow Jones Industrial Average. Quantum gains aren't enough to move the stocks yet.

In the case of Honeywell, a large aerospace supplier, pandemic-induced air-travel declines have hurt its shares. Other tech names have performed better than IBM shares. The Nasdaq Composite, for instance, is up almost 25%, and Apple (AAPL) stock touched $2 trillion in market value Wednesday.

IBM investors would surely like the company to get more credit as a tech giant with cloud computing as well as a burgeoning quantum computing business.

That's down the road, according to both Sutor and Goodson. What will be the killer app that brings quantum into the mainstream, and how will quantum change the world? Both men answered: "I have no idea."

They are both experts in the field and know that new technology tends to change things in unforeseeable ways.

Write to Al Root at allen.root@dowjones.com
