Apple Enters $2 Trillion Club, GitHub’s Chinese Counterpart And More In This Week’s Top News – Analytics India Magazine

Analytics India Magazine brings you the top trending news from the past week. Let's take a look.

Source: Marriott

Following an extensive investigation, the UK's Information Commissioner's Office (ICO) has issued a notice of its intention to fine Marriott International £99,200,396 for infringements of the General Data Protection Regulation (GDPR). The fine comes as a result of a data breach that put approximately 339 million guest records at risk globally.

According to an official statement by the ICO, the vulnerability began when the systems of the Starwood hotels group were compromised in 2014. Marriott subsequently acquired Starwood in 2016, but the exposure of customer information was not discovered until 2018. The ICO's investigation found that Marriott failed to undertake sufficient due diligence when it bought Starwood and should also have done more to secure its systems.

According to reports, China is making moves to localise its open source developer community. Amid ongoing tensions with the US, China is now promoting Gitee, GitHub's Chinese counterpart. The Chinese government has picked Gitee to construct an independent, open source code hosting platform for the country. The announcement comes after GitHub's recent compliance with US sanctions law.

According to the website, Gitee has a community of more than 5 million developers and over 10 million repos. Now the question is whether Chinese developers will migrate from GitHub to Gitee in anticipation of a conflict.

On Wednesday, Apple became the first US company to be valued at $2tn. The Cupertino tech powerhouse had also been the first company to reach a $1tn valuation on Wall Street back in 2018. Apple is the second company in the world to hit the $2tn milestone; Saudi Aramco was the first to be valued at $2tn. Apple's gains so far have been pandemic-proof, thanks to its brand value and consistent innovation with machine learning. Know more about Apple's recent announcements at WWDC here.

On Monday, Apache MXNet and Open Neural Network Exchange (ONNX) launched the Consortium for Python Data API Standards. According to the official statement, the objective of this consortium is to tackle fragmentation in the Python data ecosystem by developing API standards for arrays (a.k.a. tensors) and dataframes. For instance, an API standard can be used to specify function presence, signatures and semantics. Quansight Labs started this initiative to tackle the problem of fragmentation of data structures.
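
As a hedged illustration of the fragmentation the consortium is targeting, the sketch below shows how code could be written against whichever array library produced the data, rather than against one library's spelling of each function. The get_array_namespace helper is hypothetical and not part of any published standard; it simply stands in for the kind of common interface the consortium wants to specify.

```python
# Illustrative sketch only: get_array_namespace is hypothetical and not part of
# the consortium's (yet-to-be-published) API standard. It stands in for a common
# interface that would make this kind of dispatch unnecessary.
import numpy as np


def get_array_namespace(x):
    """Return the module implementing the array's API (numpy, torch, ...).

    Today each library spells common operations differently (np.concatenate
    vs torch.cat vs tf.concat), which is the fragmentation being addressed.
    """
    module_name = type(x).__module__.split(".")[0]
    return __import__(module_name)


def normalize(x):
    """Scale an array to zero mean and unit variance, whatever library it came from."""
    xp = get_array_namespace(x)
    return (x - xp.mean(x)) / xp.std(x)


if __name__ == "__main__":
    print(normalize(np.array([1.0, 2.0, 3.0, 4.0])))
```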

Quansight Labs is a public benefit division of Quansight, with a focus on the core of the PyData stack. The founding sponsors of this initiative include Intel, Microsoft, the D. E. Shaw group, Google Research and Quansight. Know more about the API standards here.

Source: Intel

Intel and Accenture have joined hands to support an Intel Neuromorphic Research Community (INRC) project led by the Neuro-Biomorphic Engineering Lab at the Open University of Israel in collaboration with ALYN Hospital. With support from Intel and Accenture, the Israeli research team will develop a wheelchair-mounted robotic arm to assist patients with spinal injuries in performing daily tasks.

Once the algorithmic work is complete, the research team will deploy the new model on Intel's neuromorphic hardware and test the capabilities of the arm. Know more about this project here.

Source: Cerebras

Cerebras Systems announced the integration of its 1.2-trillion-transistor Cerebras Wafer Scale Engine chip, known as the world's largest chip, into the 23-petaflop Lassen supercomputer. Lawrence Livermore National Laboratory (LLNL) becomes the first institution to integrate the AI platform with a large-scale supercomputer, creating a radically new type of computing solution that enables researchers to investigate novel approaches to predictive modeling. Know more about Cerebras' WSE here.

Facebook's AI team and NYU Langone have collaborated to make MRI scanning processes faster and more accurate than ever before. Clinicians spend up to an hour gathering sufficient data for a diagnostic MRI examination, which eats into a hospital's demanding schedule.

To make the scanning process quicker for patients, the team announced fastMRI, a major research milestone that could significantly improve the patient experience, expand access to MRIs, and potentially enable new use cases for MRIs. The technology was carefully reviewed by expert radiologists, and the AI-generated MRI images were on par with the scans generated by traditional methods, sometimes even better.

This open research from Facebook AI and NYU Langone's fastMRI initiative is a two-year-long collaborative effort to improve medical imaging technology and advance research on using AI to generate images from limited data.

Source: NVIDIA

On Wednesday, NVIDIA reported record revenue of $3.87 billion for the second quarter, up 26 percent from $3.08 billion in the previous quarter. This kind of growth during these testing times is a testament to NVIDIA's state-of-the-art products and strategic collaborations. NVIDIA's technologies are driving innovation in many advanced domains, such as deep learning and cloud services. The company also recently broke records with its new Selene supercomputer. When it comes to gaming, NVIDIA's GeForce has enabled realistic virtual worlds thanks to RTX ray tracing and AI.

"Despite the pandemic's impact on our professional visualisation and automotive platforms, we are well positioned to grow, as gaming, AI, cloud computing and autonomous machines drive the next industrial revolution around the world," said Jensen Huang, founder and CEO of NVIDIA.

NVIDIA has also partnered with Mercedes-Benz to power its next-generation fleet of luxury cars. Earlier this year, NVIDIA also completed its acquisition of Mellanox Technologies to offer high-speed networking in cloud data centers and to scale out AI services. Know more here.

Google has extended its Kormo Jobs app to its Indian users. Kormo is a jobs and careers app that connects job seekers to businesses that are looking to hire and allows job seekers to create and maintain a digital CV, all in one app. Kormo is currently available in Bangladesh, Indonesia and India. Download Kormo here.

Read the original:

Apple Enters $2 Trillion Club, GitHub's Chinese Counterpart And More In This Week's Top News - Analytics India Magazine

How ‘Fortnite’ and ‘Second Life’ Shaped the Future of Indian Market – Santa Fe Reporter

High Fidelity ran for about three years, but when it was shut down, head dev Philip Rosedale allowed its open source code to be used by anyone. The Vircadia app was born. It's the same engine that runs NDN World, and a completely free platform in which to build; SWAIA's only cost was that of the team that created its environments: a dreamlike, representative amalgamation of Santa Fe style, the Community Convention Center and a sea of walls bearing the actual artworks from artists featured at this year's market. Embodying an avatar, users can walk right up to the pieces and get information on when they were made, by whom, what awards they might have won at Market this year and so on. Elsewhere inside the experience, you can watch videos created by the artists, including histories, testimonials about winning awards and more, and it's all sorted into an intuitive and simple design, like a video game but with myriad implications that can't even be properly conceived of yet.

Excerpt from:

How 'Fortnite' and 'Second Life' Shaped the Future of Indian Market - Santa Fe Reporter

IOTA Foundation presents the current projects in the mobility industry – Crypto News Flash

At an event for the Mobility Open Blockchain Initiative (MOBI), Mat Yarger (Head of Mobility & Automotive), Jens Munch Lund-Nielsen (Head of Global Trade & Supply Chains) and Dan Simerman (Head of Finance Relations) from the IOTA Foundation gave a presentation about IOTA and provided interesting insights.

MOBI is a member-led consortium working to make transport more environmentally friendly, efficient and affordable through the use of blockchain and related technologies. Through research, innovation platforms and working groups, MOBI works to create and promote an industry standard for the adaptation of intelligent mobility blockchain solutions.

Members of the consortium include BMW, Continental, Ford, General Motors, Honda, Hyundai, Renault, Bosch, as well as major technology companies such as Accenture and IBM, and blockchain companies such as ConsenSys, Hyperledger, the Enterprise Ethereum Alliance, Ripple and the IOTA Foundation.

Mat Yarger explained that Jaguar Land Rover is one of the most important partners of the IOTA Foundation for the automotive industry. The company developed a car wallet for its I-Pace vehicles in 2019, which can send and receive payments for digital and physical services and can communicate with other cars and third-party services such as parking, tolls and marketplaces.

"One of our top partners is Jaguar Land Rover; they are endlessly supporting what we are doing, and we are very happy working with them on a lot of different mechanisms for integrating IOTA in decentralized technologies of vehicles."

The wallet provides a standardized mechanism for performing small transactions in near real-time that are secure, have audit trails and can be verified. The technology can be used for communication with toll infrastructure, for intelligent parking or for usage-based taxes. The wallet collects data on how many kilometers and on which roads a vehicle has driven and what toll infrastructure it has passed.

As Yarger further explained, the IOTA Foundation has extended its partnership with Jaguar Land Rover through a test case with the city of Trondheim. In the city, both partners are working together within the EU Horizon 2020 grant and the CityxChange consortium. However, the Horizon 2020 project is much larger and includes 32 partners and 11 test cases, on which the IOTA Foundation is working.

At the moment it is still difficult to trace energy back to its origins, especially renewable energies. However, the ability to do so opens up business opportunities, such as carbon credit tokenization and dynamic pricing based on origin. As a result, IOTA, Trondheim and Jaguar Land Rover have developed a solution:

"A joint project developed by IOTA, JLR, and Engie Labs at the energy-positive Powerhouse building in Trondheim, Norway, uses IOTA's immutable DLT to create a tamper-proof record of all energy transactions and sources at the building. This information is then shown on the dashboard of the I-Pace vehicle so the user can see the origin of the energy being used to charge the car."

Furthermore, ElaadNL has developed a plug-and-play solution for charging e-cars in cooperation with the IOTA Foundation. This reduces friction points, as the charging cable can simply be plugged in without having to set up an account with a provider beforehand. The entire charging process, up to payment, is handled by the Tangle and a wallet integrated into the car.

Another project that appealed to Yarger, and which is interesting for MOBI, is the Alvarium project with the Linux Foundation, Dell and others. Alvarium creates an open, vendor-neutral middleware stack in which multiple IOTA products act as base technologies and form the trust layer, so that companies can trust the immutability of data. Yarger explained:

"This project will be committed as open source code to the Linux Foundation in the very near future, and this project has been actively accelerating very rapidly. The real impact here from MOBI's perspective is to look at this in the perspective of this mobility network […] it is not just vehicles, it is also infrastructure in smart cities, it's multiple stakeholders […] there's a lot of interoperability that's required here. This is something that's working on scaling with multiple partners in the hardware industry, and we are talking multiple testbeds."

Below is the full presentation, which also includes the supply chain and tokenization parts. The mobility industry part starts at 20:10.

Read more:

IOTA Foundation presents the current projects in the mobility industry - Crypto News Flash

Intel Owl OSINT tool automates the intel-gathering process using a single API – The Daily Swig

Time-saving utility was finessed by an IT undergrad during the Google Summer of Code

An open source intelligence (OSINT) tool that collates threat intel data from more than 80 sources is the latest security platform to emerge from the Honeynet Project.

With a single API request, Intel Owl pulls in scan results of files, IPs, and domains from enterprise-focused threat analysis tools such as YARA and Oletools, as well as external sources like VirusTotal and AbuseIPDB.
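
The snippet below is a minimal sketch of what such a single-request lookup could look like against a self-hosted Intel Owl instance. The endpoint path, authentication header, payload fields and analyzer names are assumptions made for illustration rather than taken verbatim from the project's documentation, so check the official docs (or the pyintelowl client) for the exact interface.

```python
# Hedged sketch: endpoint path, auth header, payload fields and analyzer names
# are assumptions, not copied from Intel Owl's docs.
import requests

INTEL_OWL_URL = "http://localhost"      # assumed local deployment
API_TOKEN = "YOUR_API_TOKEN"            # issued by your Intel Owl instance


def analyze_observable(observable, analyzers):
    """Ask the Intel Owl instance to run the chosen analyzers against one IP/domain/hash."""
    response = requests.post(
        f"{INTEL_OWL_URL}/api/analyze_observable",      # assumed endpoint name
        headers={"Authorization": f"Token {API_TOKEN}"},
        json={
            "observable_name": observable,
            "analyzers_requested": analyzers,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Analyzer names below are placeholders for whatever your instance exposes.
    print(analyze_observable("8.8.8.8", ["AbuseIPDB", "VirusTotal"]))
```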

The free-to-use application helps non-specialists avoid gathering noise while speeding up their organizations' threat intelligence operations, said the project's architects in a recent blog post announcing Intel Owl's official release.

A beta version of Intel Owl, released in January, was masterminded by Matteo Lodi, threat intelligence lead engineer at Italian security firm Certego, with support from the Honeynet Project.

Keen to upgrade a very limited web interface and add further integrations, Lodi submitted a brief to the Google Summer of Code, which pairs student developers with open source projects.

A proposal, including a prototype, was accepted from Eshaan Bansal, an IT undergraduate and open source enthusiast based in New Delhi, India.

"Intel Owl seemed really interesting, matched my tech stack and had a few beginner-friendly issues," Bansal tells The Daily Swig.

Intel Owl scans files, IPs, and domains from a single API

Version 1.0.0 of the project emerged a few months later sporting a revamped web interface, complete with dark mode and several new API features.

The dashboard now displays visualized, customizable results from malware and observable scans as soon as they are generated, rather than en masse when the scan finishes.

Users can also tag analyses in order to categorize and filter various scans.

A list of all available analyzers, together with their use case and supported types, can be viewed in a tabular or dendrogram tree view.

Under the hood, Intel Owl's internal models can perform static analysis of various file types, as well as strings analysis and PE (Portable Executable) signature verification. JSON Web Tokens (JWT) are used for authentication.

Bansal also integrated additional analyzers (Team Cymru Hash Registry, Tranco Domain Rank, Cloudflare's DNS-over-HTTPS malware checker, and YARA with McAfee public rules) into the core API.

"The new Intel Owl web interface is receiving positive feedback," Bansal said. "We have solved various internal architecture problems using design patterns, making it super easy to integrate analyzers with just a few lines of code."

Bansal, who is also a member of the capture-the-flag team Abs0lut3pwn4g3, said he was proudest of hiding this complexity from the end user and offering a public interface that is easily customizable for both users and contributors.

"Intel Owl is fresh, actively maintained and leverages the most recent and trending technologies," Lodi tells The Daily Swig.

"For instance, the deployment goes seamlessly thanks to Docker," he says. "Kubernetes deployments are possible, and it is very easy to contribute to, thanks to the Django framework."

"It is also possible to interact with it in different ways: with the revamped web interface, for a friendly user experience, or with the official Python library or CLI tool for advanced users."

Lodi hailed Bansal's contribution to a tool that is "really unique and flexible for different use cases".

Bansal thanked Matteo for helping him to perfect his proposal, being supportive of his ideas, and assisting in solving various problems regarding code duplication that arose as the project scaled up.

The pair, who say the application can be set up within minutes with no extra configuration, have published guidance on installation, usage, and contributing to further development.

Lodi says the next release will have additional analyzers and basic support for multi-tenancy, for some of the most common authentication methods, and for Elasticsearch, to help more structured organizations leverage the tool.

Originally posted here:

Intel Owl OSINT tool automates the intel-gathering process using a single API - The Daily Swig

Tiger Woods, Rory McIlroy near bottom of field at The Northern Trust – ESPN

NORTON, Mass. -- Tiger Woods and Rory McIlroy walked off the 18th green early Saturday afternoon at The Northern Trust and looked like two men happy that the day was done.

After all, they had just spent a little less than five hours wandering around all parts of TPC Boston as Woods shot a 2-over 73 and McIlroy a 3-over 74.

So what else do you do after a round that, when it was over, left them ahead of just one player in the 70-player field through three rounds?

To the practice range? Putting green? Nope, just go and have lunch and forget about what just happened.

Woods and McIlroy will again be among the early pairings for Sunday's final round of the first FedEx Cup playoff event, given they did very little to improve their spots on the leaderboard.

The inability to find a groove has been a constant theme for both Woods and McIlroy since golf's restart following a three-month hiatus because of the coronavirus pandemic. Woods was tied for 40th at the Memorial and tied for 37th at the PGA Championship. McIlroy has had similar issues; he has just one top-20 finish in six starts, that coming in a tie for 11th at the Travelers Championship in late June.

Both will be in the field in next week's BMW Championship, which takes the top 70 in the FedEx Cup points list. McIlroy is in good shape to continue beyond that and get into the season-ending Tour Championship; Woods has significant work ahead of him. Only the top 30 gain entry to that event; McIlroy entered this week in the eighth spot, while Woods stood 49th.

On Friday, after making the cut on the number at 3 under, Woods said he hoped to be one of the players who "goes out and tears the course apart" on Saturday. There were, after all, low numbers to be had, considering Scottie Scheffler's 59 and leader Dustin Johnson's 60 in the second round.

And Saturday's third round, which began for Woods and McIlroy at 8:30 a.m. ET in just the third group out, nearly five hours before Johnson's tee time, started with such promise. McIlroy hit his approach at the first hole to 7 feet; Woods' second settled to 4 feet.

McIlroy made his putt for an opening birdie. Woods missed from short range, setting the tone for a day in which he needed 29 putts. On Saturday, he missed six putts inside 10 feet. Over the first two rounds, he had missed just one.

Meanwhile, McIlroy's day began to fall apart at the second. He missed the fairway with his drive but still tried to reach the par-5 in two. His approach barely cleared the hazard short of the green. His pitch hit the bank in front of him, the ball ricocheting straight up in the air and then tumbling back into the water. He was forced to walk back to the drop zone 110 yards away. By the time he was done, he had a triple-bogey 8.

After hitting a solid approach to 10 feet at the par-3 third that led to a birdie, McIlroy couldn't help but laugh at his birdie-triple-birdie start as he walked off the green.

"Yeah, 3-8-2 is a great area code," he joked.

He wasn't done with big numbers, though. At the sixth, he posted another triple bogey, again set off by a wayward drive and trouble with the rough.

McIlroy wasn't alone in his problems. After shooting even-par 36 on the front, Woods carded bogeys at 11, 12 and 14. He, like McIlroy, birdied the par-5 18th to put an end to the misery.

After Woods rolled in the 6-footer, caddie Joe LaCava jokingly waved a towel in the air in mock celebration.

Read this article:

Tiger Woods, Rory McIlroy near bottom of field at The Northern Trust - ESPN

How Intel helped give the world's first cyborg a voice – The Next Web

On a cold November day in 2016, Dr Peter Scott-Morgan was having a long, hot soak in the bath. After stepping out of the tub, he gave his foot a shake to get the water off. But his foot wouldn't move.

Peter was diagnosed with motor neurone disease (MND), the same incurable illness that killed Stephen Hawking.

The disease degenerates the nerve cells that enable us to move, speak, breathe, and swallow. In time, it can render a person physically paralyzed while their brain remains alert, locked into a body it can no longer control. Peter was given two years to live.

But Peter had a plan to beat the prognosis. He was going to become a cyborg.

Peter had a headstart in his race against the illness. He had the first PhD granted by a robotics faculty in the UK, a bachelor's degree in computing science, and a post-graduate diploma in AI. He'd also written a book titled The Robotics Revolution.

He used this experience to develop a vision for what he calls Peter 2.0, a cyborg who would not just stay alive, but also thrive.

He'd escape starvation by piping nutrients into his stomach, and avoid suffocation by breathing through a tube. His paralyzed face would be replaced by an avatar, and his disabled body would be wrapped in an exoskeleton standing atop a self-driving vehicle.

He also needed a new voice.

In early 2019, Peter gave a speech at a conference in London. Among the listeners was Lama Nachman, the head of Intel's Anticipatory Computing Lab.

Lama had her own experience with MND. Her team had upgraded the communication system that powered Stephen Hawking's iconic computerized voice.

For Hawking, Intel attached an infra-red sensor to his glasses that detected movements from his cheek, which he used to select characters on a computer. Over time, the system learned from Hawking's diction to predict the next words he'd want to use in a sentence.

As a result, Hawking only had to type under 20% of all the characters he needed to talk. This helped him double his speech rate and dramatically improve his ability to perform everyday tasks, such as browsing the web or opening documents.
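
That saving came from next-word prediction: the system ranks likely continuations based on the user's past text, so most words can be selected rather than spelled out. The toy sketch below shows the idea with a simple bigram model; it is only an illustration of the technique, not the predictive model Intel shipped in ACAT.

```python
# Toy illustration of next-word prediction from past usage.
# This is NOT Intel's ACAT model, just a minimal bigram sketch of the idea.
from collections import Counter, defaultdict


def train_bigrams(corpus):
    """Count which word tends to follow which in the user's past text."""
    following = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following


def suggest_next(following, current_word, k=3):
    """Offer the k most likely continuations, so the user picks instead of typing."""
    return [word for word, _ in following[current_word.lower()].most_common(k)]


if __name__ == "__main__":
    history = "the universe is expanding the universe is vast black holes are not so black"
    model = train_bigrams(history)
    print(suggest_next(model, "universe"))   # -> ['is']
    print(suggest_next(model, "the"))        # -> ['universe']
```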

Intel named the software the Assistive Context-Aware Toolkit (ACAT). The company later released it to the public as open source code, so developers could add new features to the system.

But Lama initially didn't want to adapt ACAT to Peter's needs.

Peter could already use gaze-tracking technology to write and control computers with his eyes. Developing a new one seemed like a waste of Intels resources.

"But then we realized the original premise of ACAT, which was essentially an open system for innovation, was exactly what was needed," Lama told TNW.

Her team decided to use ACAT to connect all the pieces of Peter's cyborg vision: the gaze-tracking, synthetic voice, animated avatar, and autonomous vehicle.

"We shifted to do two threads: one was research on the responsive generation system, and the other one was essentially taking ACAT and adding gaze control support."

But Peter still needed a new voice.

Hawking had famously chosen to keep his synthetic voice. "I keep it because I have not heard a voice I like better and because I have identified with it," he said in 2006. But Peter wanted to replicate the sound of his biological speech.

Dr Matthew Aylett, a world-renowned expert on speech synthesis, thought he could help.

He recorded Peter saying thousands of words, which he would use to create a replica voice. Peter would then use his eye movements to control an avatar that spoke in his own voice.

Aylett had limited time to work. Peter would soon need a laryngectomy that would allow him to breathe through a tube emerging above his chest. But the operation would mean he could never speak again.

Three months before Peter was due to have surgery, the clone was ready.

Aylett gave Peter a demo of it singing a song: Pure Imagination, from the 1971 film Willy Wonka & the Chocolate Factory.

Peter's operation would take place in the month in which he'd originally been told he was likely to die. The night before his operation, Peter tweeted a goodbye message alongside a photo with his husband.

The operation was a success. But Peter would remain mute until his communication system was ready. By this point, the exoskeleton and autonomous vehicle had been shelved, but the electronic voice and avatar were still part of the plan.

The system soon arrived. It came with a keyboard he'd control by looking at an interface, and an avatar synchronized with his speech. Peter 2.0 was ready to go.

There was another big difference between Peter's and Hawking's visions for their systems. While Hawking wanted to retain control over the AI, Peter was more concerned about the speed of communication.

Ideally, Peter would choose exactly what the system said. But the more control the AI is given, the more it can help.

"A lot of the time, we think when we give people the control, it's up to them what they do," said Lama. "But if they're limited in what they can do, you're really not giving them the control."

However, ceding control to the AI could come at a big human cost: it risks sacrificing a degree of Peter's agency.

"Over time, the system starts to move in a certain direction, because you're reinforcing that behavior over and over and over again."

One solution is training the AI to understand what Peter desires at any given moment. Ultimately, it could take temporary control when Peter wants to speed up a conversation, without making a permanent change to how it operates.

Lama aims to strike that delicate balance in the next addition to Peter's transformation: an AI that analyzes his conversations and suggests responses based on his personality.

The system could make Peter even more of a cyborg, which is exactly what he wants.

Peter: The Human Cyborg, a documentary chronicling his transformation, airs on the UK's Channel 4 on August 26.

Published August 21, 2020 19:04 UTC

See more here:

How Intel helped give the world's first cyborg a voice - The Next Web

Open Source and Open Standards: The Recipe for Success – The Fast Mode

Over the last ten years, technological advancements across the world have been remarkable, with the number of things that can be connected growing exponentially. By 2025, it is expected that more than 75 billion devices will be connected to the Internet worldwide. As the decade unfolds, demand will only increase for different types of streaming services and high bandwidth-consuming applications. Therefore, the need to support these coming applications will continue to mount.

To effectively do this, operators and vendors must focus on lowering costs for service deployment, fostering greater interoperability for deployment flexibility, and shortening service deployment times for market agility, whilst maintaining Quality of Experience (QoE). Paving the way for these new requirements is Broadband Forum, which is unifying the best of open standards and open source to deliver the agile technologies that enable the necessary network transformations and services of the future.

The rise of cloudification

Emerging technologies such as 5G and the proliferation of devices driven by the Internet of Things (IoT) have applied significant pressures to the network architecture. As a result, cloud technologies including Software Defined Networking (SDN) and Network Functions Virtualization (NFV) have become a key business consideration.

By introducing cloud concepts into the Central Office (CO), operators can make their networks more agile and scalable by improving flow control and enhancing functional flexibility. With operators well versed in the benefits of this, it is no surprise how quickly the number of networks leveraging these technologies has grown. For example, the global telecoms cloud market is expected to grow from $9 billion in 2016 to $29 billion by 2021.

However, the challenges around deployment, migrating to a cloud-based CO and how new and old technologies can co-exist remain.

Overcoming the challenges

Addressing these challenges through standardization is Broadband Forum's Cloud Central Office (CloudCO) initiative. This open interface is a recasting of the Central Office hosting infrastructure that utilizes NFV, SDN and cloud technologies to support network functions. The CloudCO's functionality can be accessed through a northbound API, allowing operators, or third parties, to consume its functionality while hiding from the API consumer how that functionality is achieved. The system acts as a foundation for an ecosystem to evolve, helping the thriving community of suppliers and service providers in their quest to embrace the cloud with all the benefits that interoperability brings.

Unifying open source and open standards in this way is key for network automation and ensuring the efficient delivery of broadband access technologies like cloud, NFV and SDN. Open standards are needed in order to align the industry on common architecture and migration approaches. Without these standards, operators would not be able to protect their existing asset investments and launch new opportunities for service development. Together with open source, standardization will enable seamless co-existence, by ensuring existing equipment can interoperate with new technologies, eliminating the need for all the equipment to be replaced.

Part of CloudCO, one open source solution which has gained significant traction is Broadband Forum's Open Broadband - Broadband Access Abstraction (OB-BAA). This allows accelerated deployment of cloud-based access infrastructure and services, and facilitates co-existence and migration. OB-BAA can be adapted to many software-defined access models, and the speed at which service providers can now deploy standardized cloud-based infrastructures has notably improved. This added functionality has enabled the flexible solutions needed by SDN/NFV-based networks.

The OB-BAA project is designed to be deployed within the Forum's CloudCO environment as one or more virtualized network functions (VNFs). It specifies Northbound Interfaces (NBI), Core Components and Southbound Adaptation Interfaces (SAI) for functions associated with access network devices that have been virtualized. Building on previous releases of the OB-BAA code distributions, Broadband Forum has most recently published Release 3.0 of the open source project. Release 3.0 provides capabilities to manage Simple Network Management Protocol (SNMP)-based Access Nodes via vendor adapters, thus accelerating migration to SDN-based automation platforms. It aims to take OB-BAA to the next level, providing operators with the tools to monitor and enhance network performance cost-effectively and efficiently.

Collaboration is key

The revolution of the broadband industry is now upon us - and a wide variety of different requirements have to be addressed by operators across the world. The involvement of the whole industry in the standards process is needed to ensure that the new standards developed for the world of automation are scalable and far-reaching.

Overall, OB-BAA will make it possible for operators to migrate to and manage programmable network environments, where new services can be deployed rapidly through interaction with the common abstraction of Access Nodes. Operators and equipment manufacturers will be able to reap the benefits of greater networking flexibility and be able to streamline development by implementing standard interfaces, while differentiating their service offering via stable standardized platforms.

Facilitating an agile, flexible and integrated approach, OB-BAA allows service providers to embrace the best of open source and open standards, creating a programmable broadband network which delivers on the promise of next-generation broadband, while reducing service providers' costs and protecting their investments.

To learn more about Broadband Forum's work on Open Broadband Software, please click here.

View post:

Open Source and Open Standards: The Recipe for Success - The Fast Mode

Coding within company constraints – ComputerWeekly.com

Assuming that there is at least some level of software development occurring in every business, it needs to be run optimally, be of high quality and be as cost-efficient as possible. At the same time, the pandemic has shown that some software may be called upon to do things that go way beyond its original design goals.

There seem to be a few areas of technology that resonate among the experts Computer Weekly has spoken to about what defines modern software development. Topping the list is containerised microservices. One of the main benefits of this approach is that code can be developed, tested, deployed and run in production in a manner that limits its impact on the stability of the overall IT system.

Conway's law states that any organisation that designs a system will produce a design whose structure is a copy of the organisation's communication structure. According to Perry Krug, director for customer success at Couchbase, Conway's law cannot be fought, but he believes that by understanding its influence, it is possible to work within its constraints.

Its influence extends to the realms of software development, which means modern software development practices need to be cognisant of the organisational structure that exists within a business.

At one level, this may seem to contradict the more collaborative working practices that define modern software development. "You need to collaborate to successfully develop applications," says Krug. "If new ways of working make close collaboration difficult, collaborate loosely. If shared resources are causing constraints and bottlenecks, couple software more loosely to its foundations. Trends like microservices are an explicit recognition of these constraints and should now be coming into their own."

Some industry experts believe that microservices architecture empowers software innovation.

One of these is Arup Chakrabarti, senior director of engineering at PagerDuty. "Microservices enable a far greater focus on customer experience than was previously the case," he says. "Isolating areas of the overall code base enables the creation of mini innovation factories across the business, each operating at its own pace. With less to integrate, there's less stepping on toes."

However, in Chakrabarti's experience, the use of microservices can itself lead to an explosion in complexity and all the challenges that come with that.

Bloomberg, for instance, has split some of its monolithic applications into containerised microservices that run as part of a service mesh. But this architecture has brought its own set of challenges, particularly as it can sometimes be hard for developers to fully understand what is actually going on.

Peter Wainwright, senior engineer on the developer experience (DevX) team at Bloomberg, says distributed trace can help, since it allows an engineer to see all of a service's dependencies. Downstream services can also see which upstream services rely on them.

"A prime example is our company-wide securities field computation service. Calculations around the universe of securities can take place in many places. Knowing where to route requests for such computations is non-trivial, so we use a sort of smart proxy that becomes a black box router for requests," he says.

"Services provide data for requests without needing to know who's asking. Unfortunately, this presents an obstacle when there are performance problems. Distributed trace restores the visibility into who I am calling and who is calling me that the monolith previously gave engineers implicitly."
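
As a rough illustration of how that visibility is restored, the sketch below instruments a request and its downstream call with OpenTelemetry spans. This is a generic distributed-tracing example, not Bloomberg's internal tooling; the service and span names are invented.

```python
# Generic distributed-tracing sketch using OpenTelemetry; not Bloomberg's stack.
# Service and span names below are invented for illustration.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("securities-field-service")   # made-up service name


def fetch_price(security_id):
    # Child span: the downstream dependency shows up explicitly in the trace.
    with tracer.start_as_current_span("pricing-backend.lookup") as span:
        span.set_attribute("security.id", security_id)
        return 101.25   # stand-in for the real downstream call


def handle_request(security_id):
    # Parent span: the request arriving at the "smart proxy".
    with tracer.start_as_current_span("field-computation.request"):
        return fetch_price(security_id)


if __name__ == "__main__":
    print(handle_request("XS1234567890"))
```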

To ensure its engineers could focus on their applications, Bloomberg has built scalable platforms using open source products like Kubernetes, Redis, Kafka and Chef. This, says Wainwright, enables developers to use turnkey infrastructure for the heavy lifting and drop in their application code.

In terms of reducing bugs, there is plenty that can be gleaned from how open source code is tested.

"Successful open source projects, like the GNU Compiler Collection (GCC) that builds large parts of a Linux distribution, for decades have required running a test suite before submitting a patch for inclusion," says Gerald Pfeifer, chief technology officer (CTO) at SUSE.

He says open source projects like LibreOffice use tools such as Gerrit to track changes and tightly integrate those with automation tools such as Jenkins that conduct builds and tests.

"For such integration testing to be effective long term, we need to take it seriously and neither skip it, nor ignore regressions introduced," says Pfeifer.

He believes extensive automation is a key success factor, as is enforcing policies, to ensure code quality. "If your workflow involves code reviews and approvals, doing automated testing before the review process even starts is a good approach," he says.

According to Pfeifer, LibreOffice and GCC share an important trait with other successful projects focusing on quality. "Whenever a bug is fixed, a new test is added to their ever-growing and evolving regression test suites," he says. "That ensures the net which is cast constantly becomes better at catching, and hence avoiding, not only old issues creeping in again, but also new ones from permeating. The same should apply to new features, though when the same developers contribute both new code and test coverage for those, there tends to be a risk of having a blind eye."
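
A minimal pytest-style illustration of that habit, under invented names rather than the GCC or LibreOffice suites themselves: the test added with the bug fix pins the previously broken input, so the old issue cannot quietly return.

```python
# Illustrative regression test; the function and the "whitespace bug" are invented.
# The habit being shown: every bug fix lands together with a test that pins it.
import pytest


def parse_version(text):
    """Parse 'major.minor' into a tuple. A hypothetical earlier bug choked on whitespace."""
    major, minor = text.strip().split(".")
    return int(major), int(minor)


def test_parse_version_basic():
    assert parse_version("3.11") == (3, 11)


def test_parse_version_regression_whitespace():
    # Added when the (hypothetical) whitespace bug was fixed; keeps it from creeping back.
    assert parse_version(" 2.7\n") == (2, 7)


def test_parse_version_rejects_garbage():
    with pytest.raises(ValueError):
        parse_version("not-a-version")
```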

Describing how its service mesh architecture is tested, Bloomberg's Wainwright says: "Instead of bundling changes on a fixed schedule, better testing enables us to make changes more rapidly. Changes are smaller, so there's less that can go wrong and mitigation is simpler."

While code quality is often measured in terms of defect rates, error budgets and the like, Wainwright believes the real benefit of easier testing is psychological. "Teams confident that their tools have their back will make more progress in a shorter time," he says. "That means we deliver more value to our customers faster."

However, as PagerDuty's Chakrabarti points out, one of the biggest changes of recent years is customers' intolerance for anything less than perfection when it comes to performance. Many believe engineers have their own hundred-millisecond rule to contend with. "Come back later is no longer an acceptable response," he says.

According to Chakrabarti, the idea of engineering web-scale applications is pretty much taken as given these days, particularly in the wake of the coronavirus, which has seen companies scrambling to support new ways of working. Data from PagerDuty shows that new code and heavier volumes of traffic have resulted in more incidents as much as 11 times more in some sectors.

"As an industry, we've got better at fixing things without affecting customer experience, including through an automated approach to digital operations management. It's still early days for auto-remediation, but we are starting to hand over more control to technology. With time and further advances in machine learning, we ought to be able to teach some of our systems to self-heal based on past events, even in circumstances that only occur once in a blue moon," says Chakrabarti.

Beyond the technical aspects of developing bug-free code, built in a modern way, such as the self-contained IT architectures and microservices approach discussed earlier, the coronavirus pandemic has made team communications a top priority.

In a socially distanced world, product managers are more important than ever. "Developers will know what needs to be done to create a solution if they know what they're supposed to solve," says Couchbase's Krug. "However, solving these issues means getting inside the customer's head. Expecting developers to become psychologists is a step too far. Instead, there needs to be clear communication between product managers and developers over what customers' issues are and, ideally, what they want their end state to be."

"This means fast communication with distributed teams is essential," adds Krug. "If it isn't in place, a priority for all teams in a business should be making sure it is."

Follow this link:

Coding within company constraints - ComputerWeekly.com

AWS Controllers for Kubernetes Will Be A ‘Boon For Developers’ – CRN: Technology news for channel partners and solution providers

A new Amazon Web Services tool that allows users to manage AWS cloud services directly within Kubernetes should be a boon for developers, according to one AWS partner.

AWS Controllers for Kubernetes (ACK) is designed to make it easier to build scalable and highly available Kubernetes applications that use AWS services without the hassle of defining resources outside a cluster or running supporting services such as databases, message queues or object stores within a cluster.

The AWS-built, open source project is now in developer preview on GitHub, which means the end-user-facing installation mechanisms aren't yet in place. ACK currently supports Amazon S3, AWS API Gateway V2, Amazon SNS, Amazon SQS, Amazon DynamoDB and Amazon ECR.

"Our goal with ACK (is) to provide a consistent Kubernetes interface for AWS, regardless of the AWS service API," according to a blog post by AWS principal open source engineer Jay Pipes; Michael Hausenblas, a product developer advocate for the AWS container service team; and Amazon EKS senior project manager Nathan Taber.

ACK got its start in 2018 when Chris Hein, then an AWS partner solutions architect, debuted AWS Service Operator (ASO) as an experimental project. Feedback prompted AWS to relaunch it last August as a first-tier, open-source software project, and AWS renamed ASO as ACK last month.

"The tenets we put forward are: ACK is a community-driven project based on a governance model defining roles and responsibilities; ACK is optimized for production usage with full test coverage, including performance and scalability test suites; (and) ACK strives to be the only code base exposing AWS services via a Kubernetes operator," the blog post states.

ACK continues the spirit of the original ASO, but with two updates in addition to now being an official project built and maintained by the AWS Kubernetes team. AWS cloud resources now are managed directly through AWS APIs instead of CloudFormation, allowing Kubernetes to be the single source of truth for a resource's desired state, according to the blog post. And code for the controllers and custom resource definitions is generated automatically from the AWS Go SDK, with human editing and approval.

"This allows us to support more services with less manual work and keep the project up to date with the latest innovations," the AWS blog post stated.

ACK is a collection of Kubernetes Custom Resource Definitions and Kubernetes custom controllers that work together to extend the Kubernetes API and create AWS resources on behalf of a user's cluster, according to AWS. Each controller manages custom resources representing API resources of a single AWS service.

Kubernetes users can install a controller for an AWS service and then create, update, read and delete AWS resources using the Kubernetes API in lieu of logging into the AWS console or using AWS Command Line Interface to interact with the AWS service API.

"This means they can use the Kubernetes API to fully describe both their containerized applications, using Kubernetes resources like Deployment and Service, as well as any AWS managed services upon which those applications depend," AWS said.
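
As a hedged sketch of that workflow, the snippet below uses the official Kubernetes Python client to declare an S3 bucket as a Kubernetes custom resource once an ACK controller is installed. The group, version, plural and spec fields shown are assumptions based on the developer preview and may differ, so treat them as placeholders and verify them against the installed CRDs.

```python
# Sketch of creating an AWS resource through the Kubernetes API with an ACK
# controller installed. Group/version/plural/spec values are assumptions from
# the developer preview; verify against `kubectl get crds` on your cluster.
from kubernetes import client, config


def create_s3_bucket_resource(name, namespace="default"):
    config.load_kube_config()               # uses the current kubeconfig context
    api = client.CustomObjectsApi()
    bucket_manifest = {
        "apiVersion": "s3.services.k8s.aws/v1alpha1",   # assumed ACK API group/version
        "kind": "Bucket",
        "metadata": {"name": name},
        "spec": {"name": name},                          # assumed spec field
    }
    return api.create_namespaced_custom_object(
        group="s3.services.k8s.aws",
        version="v1alpha1",
        namespace=namespace,
        plural="buckets",
        body=bucket_manifest,
    )


if __name__ == "__main__":
    print(create_s3_bucket_resource("my-ack-demo-bucket"))
```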

AWS plans to add ACK support for Amazon Relational Database Service and Amazon ElastiCache, and possibly Amazon Elastic Kubernetes Service (EKS) and Amazon Managed Streaming for Apache Kafka.

The cloud provider, which is seeking developer input on the expected behavior of destructive operations in ACK and whether it should be able to adopt AWS resources, also is working on enabling cross-account resource management and native application secrets integration.

AWS Partner Reaction

ACK is a strategic move for AWS, especially as it competes with other Kubernetes offerings from competitors including Google Cloud, which already offers native integration from its Google Kubernetes Engine (GKE) to its cloud services such as Spanner, BigQuery and others, according to Bruno Andrade, a cofounder and CEO of AWS partner Shipa, a Santa Clara, Calif.-based startup that launched this year and integrates directly into AWS' Kubernetes offering and its services.

"We believe ACK makes total sense, especially for users that are looking at building a true cloud-native application, where there is native integration to cloud services for their application directly from their clusters, which can reduce drastically the time to launch applications or roll out updates," said Andrade, whose company allows teams to easily deploy and operate applications without having to learn, write and maintain a single Kubernetes object or YAML file.

"ACK and GKE connector are focused on services running within their clusters and clouds," Andrade said, "so one thing that still (needs) to be fully addressed are cases when customers have clusters running across multiple clouds and on-premises, and how the workloads running across these clusters will properly connect across the cloud-native services offered by the different services."

"When using Kubernetes clusters in production, workloads typically need to integrate with other cloud services and resources to deliver their intended solutions," said Kevin McMahon, executive director of cloud enablement at digital technology consultancy SPR, an AWS Advanced Consulting Partner based in Chicago.

"Integrating with the cloud services provided by vendors like AWS requires custom controllers and resource definitions to be created," he said. "AWS Controllers for Kubernetes makes it easier to enhance Kubernetes workloads using AWS cloud services by providing vendor-managed, standardized integration points for companies relying on Kubernetes. Now companies looking to use Kubernetes can completely describe their applications and the AWS managed services that those applications rely on in one standard format."

With ACK, AWS continues to simplify the deployment and configuration of its services by integrating natively with Kubernetes, said Alban Bramble, director of public cloud services at Ensono, an AWS Advanced Consulting Partner and managed services provider headquartered in Downers Grove, Ill.

"This change will be a boon for developers looking to speed up releases and manage all resources from a single deployment," Bramble said.

But one area of possible concern, according to Bramble, is that this could negatively impact policies already put in place by SecOps teams, resulting in resources being deployed without their knowledge and thereby reducing their ability to effectively monitor and secure the services running in the environment.

"Careful consideration and planning needs to take place between those two groups in order to ensure that processes are in place that don't stifle the developers' ability to work within agile release cycles, while also accounting for the governance and security policies already in place," he said.

More:

AWS Controllers for Kubernetes Will Be A 'Boon For Developers' - CRN: Technology news for channel partners and solution providers

Telegram launches one-on-one video calls on iOS and Android – The Verge

Secure messaging app Telegram has launched an alpha version of one-on-one video calls on both its Android and iOS apps, the company announced, saying 2020 had highlighted the need for face-to-face communication.

In a blog post marking its seventh anniversary, Telegram described the process for starting a video call: tap the profile page of the person you want to connect with. Users are able to switch video on or off at any time during a call, and the video calls support picture-in-picture mode, so users can continue scrolling through the app if the call gets boring. Video calls will have end-to-end encryption, Telegram's blog post states, one of the app's defining features for its audio calls and texting.

"Our apps for Android and iOS have reproducible builds, so anyone can verify encryption and confirm that their app uses the exact same open source code that we publish with each update," according to the post.

In April, Telegram announced it would launch group video calls later this year. This isn't quite that, but in the most recent blog post, the company indicated that video calls will receive "more features and improvements in future versions, as we work toward launching group video calls in the coming months."

Telegram said in April it had reached 400 million monthly active users.

Go here to see the original:

Telegram launches one-on-one video calls on iOS and Android - The Verge