Bitcoin (BTC) has come to an end, sell everything. – IdahoReporter.com

Eleven years after Satoshi Nakamoto created Bitcoin, we are witnessing history. Crypto as we know it is dying, slowly but surely. No, the BTC price will not recover from here. These are probably its last weeks above $10K.

Back in January 2009, Satoshi Nakamoto released the open-source code that marked the start of Bitcoin's life. Nakamoto mined the first block of the chain, known as the genesis block.

But you know the rest of the story: ups and downs, and more ups and downs. Millions were invested in cryptocurrencies, the price of one Bitcoin peaked at $20K in 2017, and now we are here. BTC investors thought things were heating up again and that we would see another $20K price tag this year, but everything went backwards.

BTC shed $1,000 of its value in less than a month. Even though the BTC chart shows that this is actually just a small correction, I am pretty sure that BTC has hit a brick wall. In my opinion, the main reason for this is the huge rally we saw on the Nasdaq and NYSE.

On a year-to-date basis, BTC gained a measly 30% reward for its long-term holders. If you are laughing at me and asking yourself how 30% could be low, I will explain. Preached as an alternative to fiat currencies, BTC failed to prove that during the COVID-19 outbreak in March. Dragged down by fear, BTC went below the $5K mark and gradually recovered from there, but it did so only because all markets recovered.

The Nasdaq, NYSE and gold all recovered from the huge sell-off in March. Gold even hit an all-time high. And pretty much all markets offered similar rewards to investors: from the March bottom to the August peaks, all the numbers went up almost 100%. So why invest in crypto when you can invest in real companies with tangible products and SaaS companies with huge profits? This is what is pushing investors away from crypto.

And that is before we talk about the wave of SPACs (even crypto big boys are now using this scheme to enter the Nasdaq), where investors saw some massive (over 300%) gains in a matter of days or weeks. Or about new and emerging markets such as online betting apps (LCA, DKNG), the hydrogen future (NKLA, SHLL), and finally the Apple of China (Xiaomi) or the Teslas of China (NIO and XPEV). These companies still offer tremendous growth opportunities. And while these are hand-picked stocks, there are still hundreds of other companies with 100%+ upside from here.

Blockchain tech will stay with us, but the golden days of crypto coins are finished. The sooner you realize that, the better for you.

I ask you now. Why invest in shit(alt)-coins when you can get better ROI on technologies of the future? Seems like many investors are asking the same question.


A hundred academics demand more transparency from the Government with the Radar Covid app | Technology – Explica

More than 110 renowned Spanish academics, most of them technology experts, published a manifesto this Saturday calling for more transparency from the Government in the development of software as sensitive as Radar Covid, the public exposure-notification app. In the text, they ask that the promised publication of the app's code be exhaustive, well documented, and encompass all stages of the app's development, from its inception to future changes. Throughout its almost three pages, the signatories applaud the "innovative milestone for Spanish public health" of this tool, but regret that the Secretary of State for Digitization and Artificial Intelligence, which heads the application, has to date not published any documentation on the design of Radar Covid, on its implementation, or on the integration process with the Autonomous Communities.

The Secretary of State, after constant criticism for not fulfilling the commitment to open the source, has promised to publish the code next Wednesday. But it is not yet clear how deep and sustained the government's gesture will be: "The opening of the code must be accompanied by complete documentation and information, so that the scientific community and civil society have the necessary scrutiny capacity to identify points to improve and contribute to developing and deploying Radar Covid according to the highest standards," the manifesto indicates.

Among the signers of the manifesto are Daniel Innerarity, Professor of Political and Social Philosophy; Carme Torras, professor at the Robotics Institute of the CSIC and member of the Government's National Council of Artificial Intelligence; Itziar de Lecuona, UNESCO Professor of Bioethics at the University of Barcelona and member of the multidisciplinary working group of the Ministry of Science; Carmela Troncoso, promoter of the DP-3T protocol, which the Radar Covid app uses, and recently named by Fortune magazine one of the most promising figures under 40; Ricardo Baeza-Yates, professor of Data Science and member of the Government's National Council of Artificial Intelligence; Miguel Luengo-Oroz, head of data for United Nations Global Pulse; Maribel González Vasco, professor of Applied Mathematics at the Rey Juan Carlos University; Lorenzo Cotino, Professor of Constitutional Law at the University of Valencia; Josep Domingo-Ferrer, UNESCO Chair in Data Privacy; Juan Tapiador, professor of Computer Science at the Carlos III University; and José Molina Molina, president of the Transparency Council of the Region of Murcia.

To questions from EL PAÍS, sources from the Secretary of State insist on their commitment to publish the code on September 9: "We will comply with the commitment to publish on the day, and faster than expected. It is something unprecedented in the Spanish public administration and an exercise in transparency," they say, adding: "Let's hope that when the code is released, people look at it, fiddle around, and help to verify and improve the tool."

The manifesto praises the achievement of the Spanish Administration in launching an app like Radar Covid, but a tool with such reach (more than 3.4 million downloads), so sensitive, and that must generate trust needs an exemplary and flawless process, and should serve as a precedent for future software developments: "There is no technology without flaws and therefore multidisciplinary scrutiny is necessary to achieve the best result," they say in the text. Only open and joint work, they continue, can "efficiently identify potential biases and errors in the conceptualization and implementation of the application that may lead to undesired effects in terms of discrimination and violation of rights." Nothing in the text implies that there are errors or problems with the app, but the only way to know is through public scrutiny. To make that possible, and after waiting for weeks for the internals of the application to be made known, they establish a series of essential elements that the Government must publish.

One of the most relevant points is access to the code, which would allow analysis of all the elements of the tracking system, including the servers, their governance, and the app itself, which has already been downloaded by more than 3.4 million Spaniards. "Where are they, who manages them, and what security measures have been adopted, both for the deployment at the national level and relative to the autonomous communities?" the academics ask, along with requesting the evolution of the code since the beginning of the initiative. The revision of previous versions is necessary because not all users periodically update their phones, they add.

The transparency required of Radar Covid does not only concern technical aspects. Building the application in a certain way depends on another series of decisions, such as the adoption of a decentralized communication protocol in order to preserve the anonymity of the users. For this reason, they consider it vital to have the system design report: one "detailing the analyses that led to the chosen configuration parameters and the use of the Google and Apple exposure notification API, the implemented mechanisms, and the libraries and services used to evaluate the security and privacy of the data, as well as the evaluation of the inclusiveness and accessibility of the design."

Privacy has aroused a certain suspicion in society, though the Government and numerous experts have maintained that Radar Covid respects it at all times. The use of Bluetooth and built-in protocols, such as the generation of random alphanumeric codes that phones exchange with one another, prevents individual identification. To verify this, the signatories want a detailed report containing "the application monitoring mechanisms and associated mechanisms to ensure privacy and compliance with data protection regulations, referring to the data collected both during the pilot and in the production phase."
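The privacy mechanism can be illustrated with a minimal sketch of a DP-3T-style scheme (the function names and parameters below are simplified for illustration and are not the actual Radar Covid implementation): each phone derives short-lived pseudorandom identifiers from a locally held secret, so an observer recording the broadcasts cannot link them back to a person.

```python
import hashlib
import hmac
import secrets

def daily_key():
    # Each device generates a fresh random secret for the day;
    # it never leaves the phone unless the user tests positive.
    return secrets.token_bytes(32)

def ephemeral_ids(key, count, id_len=16):
    # Derive short-lived broadcast identifiers from the daily key.
    # Without the key, the identifiers look like unrelated random codes.
    return [
        hmac.new(key, f"epoch-{i}".encode(), hashlib.sha256).digest()[:id_len]
        for i in range(count)
    ]

key = daily_key()
ids = ephemeral_ids(key, 3)  # three rotating Bluetooth identifiers
```

Devices that later learn the daily key of an infected user can re-derive these identifiers locally and check them against what they overheard, which is what keeps the matching decentralized.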

With the intention of settling any doubts and democratizing a process as novel in Spain as the construction of a public-health app during a pandemic, they also require "an impact assessment on data protection based on the design report and the associated risk analysis of the application," as well as identification of the responsibilities and role played in the project by private entities.

Until the Secretary of State releases the code, the manifesto recalls that Radar Covid is simply a complementary measure: it does not replace manual contact tracers or remove the need to maintain a safe distance or wear masks. "In order to guarantee the impact of the application, it is necessary to adopt legal and budgetary measures of social support that allow users to follow the recommendations of the app without suffering economic, labor or social harm," the academics say.

Under the idea of tackling the health emergency on all fronts, the signatories go beyond the technological issue. In their opinion, all the effort made must be accompanied by supervision that identifies potential discriminatory abuses in areas such as housing, the labor market, and education. "Only a joint interdisciplinary effort, together with civil society, can efficiently identify potential biases and errors in the conceptualization and implementation of the application that can lead to undesired effects," they reason.



What’s the point: Red Hat Marketplace, JDK version control, and Visual Studio Codespaces DEVCLASS – DevClass

After keeping its business under its hat for a couple of months, the venerable open-source software purveyor and IBM acquisition has now made its open cloud marketplace, Red Hat Marketplace, generally available. The service is operated by parent company Big Blue and is supposed to offer one curated repository of tools and services for hybrid cloud computing.

In Red Hat's case, the latter is another way of saying OpenShift, since the service really provides a variety of paid software certified to run on the company's container application platform. At the time of writing, the Marketplace contains 62 products in categories ranging from security and monitoring to logging, tracing, and machine learning.

Organisations interested in a more bespoke offering can choose to set up a private marketplace with Red Hat Marketplace Select. These can be made to include only pre-approved services, giving admins a way of creating a sort of self-service portal for development teams, with options to track usage and spending across cloud environments.

Developers looking to help move OpenJDK forward no longer need to learn the Mercurial version control system to participate in the project. The transition of the Java implementation's jdk/jdk and jdk/sandbox repositories to Git, GitHub, and Skara was completed last weekend, with a getting-started guide available for those who need help to get going again.

Users working with the JDK Updates project need to be aware that the associated repositories still use Mercurial, so a quick glance at the wiki might be helpful. To keep things from getting too complicated, the Skara CLI tooling is promised to be backward compatible with Mercurial, and help is meant to be available via the skara-dev mailing list or IRC.

Microsoft is ending its Visual Studio Codespaces experiment and looks to consolidate the in-browser IDE formerly known as Visual Studio Online with GitHub Codespaces. VS Codespaces will be retired in February 2021, though current users can still create new plans and codespaces until 20 November.

Self-hosting, which some organisations saw as a major selling point of VS Codespaces, isn't in the cards for GitHub Codespaces, and neither is a way to migrate codespaces set up with the VS flavour to GitHub, meaning that they have to be recreated from scratch. However, GitHub Codespaces is still in limited public beta, which means VS Codespaces users might have to wait a while until they are added to the club and are actually able to access the alternative offering anyway.

The move to axe the Visual Studio product is the result of confusion among users, who found the distinct experiences tricky to handle. Since GitHub also belongs to the Microsoft family, the merger should help save resources that could be used to address customer woes more quickly.

Visual Studio Code has gotten a new extension: in-memory data store Azure Cache for Redis is now available as a preview. The addition can be found in the Visual Studio Code Marketplace or via the extensions tab, and is useful for viewing, testing, and debugging caches.


12-Year-Old Figures Out Netflix Lock Code With This "Genius" Trick – NDTV

A 12-year-old figured out the parental lock on Netflix (Representative Image)

Irish-Canadian author Ed O'Loughlin was left "both frightened and impressed" with the way his youngest daughter figured out a way to hack their Netflix parental code.

To allow parents a degree of control over what their children are watching, the streaming giant gives parents the option of using a PIN code to lock certain content. It turns out that Mr O'Loughlin's 12-year-old really wanted to watch 'The Umbrella Academy' on Netflix - but instead of asking her parents for permission, she simply devised a "genius" way to guess their lock code.

Her father took to Twitter on Sunday to explain how she did it using just a bit of grease and some clever thinking. "My youngest hacked our Netflix parental code. She put light grease on the remote and got me to input the code when she wasn't looking. Then she noted the numbers I'd pressed and went through the combinations later," wrote Ed O'Loughlin, adding that her trick left him impressed as well as frightened.
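The arithmetic behind the trick is what makes it effective: a four-digit PIN has 10,000 possibilities, but once you know which four keys were pressed, only their orderings remain, at most 4! = 24. A quick sketch (the digits below are made up, not the family's actual code):

```python
from itertools import permutations

def candidate_pins(observed_digits):
    # Knowing which keys were greasy shrinks the search space from
    # 10**4 = 10,000 possibilities to at most 4! = 24 orderings.
    # The set removes duplicates in case a digit was pressed twice.
    return sorted({"".join(p) for p in permutations(observed_digits)})

pins = candidate_pins("2580")  # hypothetical observed keys
```

Trying two dozen combinations by hand takes minutes, which is presumably why the trick worked.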

In a follow-up tweet, he explained that his daughter, aged 12, went to all that trouble to watch 'The Umbrella Academy'.

Mr O'Loughlin's tweet has gone viral with over 3.5 lakh 'likes' and more than 32,000 'retweets'.

Many in the comments section shared tales of their own children tricking them, while others said that the 12-year-old's guessing trick was "genius".

"Cut from the same cloth as my devious youngest daughter. She handed me the controls when I sat with my back to the window at night so she could see the reflection. I think she was about 7 at the time. Now she's 16 I sleep with one eye open," wrote one Twitter user.

"Very, very impressive. My son ran two school planners - one with all of his good comments and one with his bad ones," said another.

"The child is a genius," a Twitter user remarked.

In May this year, Netflix itself was left impressed with the way a woman managed to use her ex-boyfriend's account secretly.


Why use of Open Source must be accounted for in Business Continuity Planning of a business – Lexology

Today, using Open Source Software to develop commercial software is the norm. A critical concern for such businesses is ensuring business continuity at all costs.

Could use of Open Source Software hamper the business continuity? We investigated a landmark security event in the history of Open Source Software to vet this question.

The Comfort or peril of using Open Source Software

Open Source Software provides unique building blocks for software development in today's age. The time, effort, and investment that would have been required to build similar software from scratch with a dedicated in-house team is unimaginable! Most companies today understand the trade-off very clearly and therefore allow the use of Open Source Software in their projects, while greenfield development is kept focused where the core Intellectual Property of such companies lies.

Companies often frame detailed policies to regulate the use of Open Source Software, which we will discuss in further detail in later blog posts, drawing on our experience of creating Open Source Policies for different types of companies.

So, there is an evident Comfort in using Open Source, yet such use of Open Source is not without its own peril.

Remember Heartbleed!

Heartbleed was detected as a serious security vulnerability on 1 April 2014, leading many commentators to gasp and note that this could be the worst kind of vulnerability seen since the beginning of the Internet.

Just to refresh the memory: it is suspected that while OpenSSL was being updated with the Heartbeat extension, the vulnerability crept into the source code through a coding oversight by the developers, as far back as 2012. Though often contested, it is also suspected that some bad actors may have exploited the vulnerability at least five months prior to its eventual detection and disclosure to the public.
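For readers who want the gist of the bug itself: the Heartbeat handler echoed back as many bytes as the request claimed to contain, rather than as many as it actually contained. The real flaw was in OpenSSL's C code; the following Python sketch only simulates the missing bounds check:

```python
def heartbeat_vulnerable(memory, payload_start, claimed_len):
    # BUG: trusts the attacker-supplied length, so the reply can read
    # past the payload into adjacent memory (keys, passwords, etc.).
    return memory[payload_start:payload_start + claimed_len]

def heartbeat_fixed(memory, payload_start, claimed_len, actual_len):
    # FIX: never return more than the payload actually contained.
    n = min(claimed_len, actual_len)
    return memory[payload_start:payload_start + n]

# Simulated process memory: a 4-byte payload followed by secret data.
memory = b"PING" + b"SECRET-PRIVATE-KEY"
leaked = heartbeat_vulnerable(memory, 0, 22)  # echoes the secret too
safe = heartbeat_fixed(memory, 0, 22, 4)      # echoes only the payload
```

A one-line length check was the difference between a routine keep-alive and one of the most damaging disclosures in Internet history.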

Codenomicon, since acquired by Synopsys, maintains a page at https://heartbleed.com/, which serves as a constant reminder of the vulnerability that had a massive effect on IT infrastructure maintainers across the globe.

But then we ask: what happens when such a vulnerability gets detected, and worse still, when a threat actor exploits it?

The typical response of businesses ranges from immediately stopping access to the software completely and introducing a patch to rectify the vulnerability before re-enabling access, to negotiating with the threat actor for recovery of lost data while, in parallel, tackling questions from privacy regulators as well as the data owners.

Amid all of this, Business Continuity gets lost, raising serious questions about the company's readiness to ensure it.

What could cause such a peril?

While facing Heartbleed, a pertinent question was asked: was it avoidable?

Through the blog post of Steve Marquess, the then CEO of the OpenSSL Software Foundation, we get a rare view of the inner workings of such a popular Open Source Project. Steve says that most of the revenue collected by the OpenSSL Software Foundation comes from two streams: donations and commercial contracts for support.

Though the project had been very popular even before Heartbleed happened, it is evident that the flow of money into the project had been scarce compared to what would be required to maintain a project of such scale. Also, it seems the developers, who mostly worked part-time on the project, were driven by their passion to create above everything else, and even though they were highly capable, an oversight could not be ruled out.

Post Heartbleed, when the issue came to fore, financial support got extended from different quarters.

Even at the risk of hindsight bias, it could be argued that the incident was not entirely unavoidable, or at least not unpredictable for the businesses that were affected, given the clear warning sign that such an important project had such an informal approach to maintenance.

What should bother most businesses is that the propensity for such errors, owing to the lack of support for most Open Source Projects, at least in their initial stages and without formal backers, is the rule rather than the exception.

So, what do you do?

Review. The first step is determining which Open Source Software matters to your business. While advising our clients, this is the most common response we receive: "our code has grown organically; we aren't able to distinguish Open Source Software from the code written in-house."
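A review can start with something as simple as an inventory of declared dependencies checked against known advisories. The sketch below uses a made-up advisory list purely for illustration; a real review would rely on an SBOM tool or a vulnerability database:

```python
def parse_requirements(text):
    # Build a {package: version} inventory from a pinned requirements file.
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        deps[name.lower()] = version
    return deps

def flag_risky(deps, advisories):
    # Return the dependencies that appear in the advisory list.
    return {n: v for n, v in deps.items() if (n, v) in advisories}

reqs = "requests==2.19.0\n# dev tools below\nflask==2.3.2\n"
advisories = {("requests", "2.19.0")}  # hypothetical advisory entry
risky = flag_risky(parse_requirements(reqs), advisories)
```

Even this crude pass answers the first question a continuity plan needs answered: which open-source components is the business actually standing on?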

Revise. Check whether there is Open Source Software with known warning signs that can cause you trouble, and rectify. If the need arises, plan to move towards mature and robust Open Source Software (if you do not intend to move to in-house development), and better yet, contribute to the community through donations, commercial support, or vulnerability fixes. It is always a great practice to remain invested in the community that supports your business.

Prepare to Respond. Finding a vulnerability is inevitable. The strength of a business is a function of how it responds to a crisis. So define a response plan in case a Heartbleed strikes your business, including who would respond to it, how you would contain it, and how you would communicate with your clients, partners, regulators, and so on.

Remember: your comfort in using Open Source Software must not come at the cost of your Business Continuity.


Consumer IoT is broken and our stupid optimism is to blame – Stacey on IoT

Some of us have a closet full of connected devices that at one time represented the cutting edge of smart home tech: devices like the Revolv hub, the Lighthouse camera, the Jibo robot, the Petnet feeder, and the original Sonos speakers. And while most of us were able to shrug off the end of Juicero and the related loss of $400 as the inevitable cost of living the early adopter lifestyle, the perception of expensive, short-lived gadgets haunts consumer IoT.

Every time these stories hit the tech press or the mainstream media, a much larger group of people congratulate themselves for not buying into the latest hype around connected devices and smart homes. They recognize that this tech is new, unproven, and likely not as convenient or necessary as its creators claim. Which is why, even more than the lack of standards that we in the smart home world constantly bemoan, the lack of faith in the life of a connected product hurts the IoT. After all, if you can't convince someone to buy a connected product in the first place, they'll never reach the point where they'll want it to interoperate with other devices.

So what's the industry to do? The common demand after a product fails and the company that makes it tells its customers it's turning off the servers (if it does, in fact, tell customers that) is to open-source the device code so the tech-savvy early adopters can keep the device operational. But while this sounds great, it's akin to cryonically freezing your head in the hopes of coming back to life after death.

Getting the code running on a connected device without having access to the backend cloud code and the application code may allow the device to run, but the overall experience will suffer. The device may work, but it won't have a good interface (or you'll have to build and maintain one) and it won't have a cloud component for remote access and other functionality (unless you build and maintain one). Just as your newly thawed head will need a body, the open-source device code needs someone to build and maintain a cloud backend and a mobile application.

There are third-party companies, such as Digital Dream Labs, which raised funds to take over the support and development of the Anki Vector robot, that are attempting to take device code and build infrastructure around it. But doing so requires expertise, time, and money. Are customers willing to pay someone a second time just to keep their devices running?

So when asking a company to open up its source code for a connected device, ask yourself: if your frozen brain were successfully revived, would that be enough for you?

Such partial resurrection was the plan all the way back in 2015, when we started seriously discussing what happens when connected products fail. Some companies put code in escrow so people could have it later and maintain it. In the meantime, I encouraged venture firms, entrepreneurs, and development shops to think about failure and how their product might gracefully degrade to give the consumer time to come to terms with the loss.

Now I realize that keeping code in escrow and thinking about failure are only part of the solution. Granted, because people crazy enough to build a connected pet feeder are often stupidly optimistic, it's still good advice. Please do think about what happens if your business fails, so you can build a decent experience for the end consumer. Also, set milestones that indicate failure so you can warn your customers in time and perhaps allocate some of the diminished cash on hand to ensuring a graceful shutdown.
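Graceful degradation can be as simple as keeping enough state on the device that its core function survives the loss of the backend. The sketch below is a hypothetical controller, not any real vendor's code:

```python
class PetFeeder:
    """Hypothetical device controller that degrades gracefully when
    its cloud backend disappears (all names here are illustrative)."""

    def __init__(self, schedule):
        self.schedule = schedule     # feed times cached on the device
        self.cloud_online = True

    def fetch_remote_schedule(self):
        if not self.cloud_online:
            raise ConnectionError("cloud backend unreachable")
        return ["08:00", "18:00"]    # schedule managed via the app

    def next_feeds(self):
        # Prefer the cloud schedule, but fall back to the local copy
        # so the core function survives a server shutdown.
        try:
            return self.fetch_remote_schedule()
        except ConnectionError:
            return self.schedule

feeder = PetFeeder(["07:30", "19:30"])
feeder.cloud_online = False          # simulate the company going dark
feeds = feeder.next_feeds()
```

The pet still gets fed when the servers go away; only the remote conveniences are lost, which is exactly the kind of shutdown story that would soften the blow for customers.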

By far the greatest challenge for connected device companies is the ongoing cost of operating those devices. An investor in Mellow, the maker of a connected sous vide machine that recently told customers they needed to pay a subscription fee or see certain features vanish, explained the issue. He told me that the company has roughly $4,000 in monthly costs associated with the device, and those costs go up the more people use it. And that doesn't include the ongoing costs associated with having a developer update the Android and iOS apps.

Some of those monthly bills might be lowered by choosing a different cloud architecture or security platform, but every connected device company has to account for ongoing maintenance costs. And the more features a company adds and the more customers it has, the higher those costs tend to go. At an event I hosted in August, Matt Van Horn, CEO of June Life, said that his company's cloud bill continues to rise, and he doesn't have the resources or cloud infrastructure that Amazon or Google do.

So one option might be to only buy gadgets made by those companies, since I doubt AWS is going to shut Alexa down if that business unit stops paying its cloud bills. But thats a really limiting option for consumers and for innovation in the sector overall. Nate Williams, a former employee at August and now an investor at Union Labs Ventures, says he thinks some kind of model built around an independent organization that companies pay into, and that will operate and support a device and the supporting server code going forward, might help.

He initially likened it to a homeowners association for smart home devices, but given the negative connotations around HOAs, he then clarified that he was seeking a sense of shared responsibility as opposed to something punitive. But I think having a little enforcement might actually be good. We could see companies pay into an organization that ensures a product has six months or a year of cloud and developer costs in escrow, to at least ensure a failed company can keep a product running for a little while longer after giving customers notice that it will die.

That organization should also have some sort of provision for getting the remaining stock of a defunct product off the shelves. Indeed, I'd love to see retailers like Best Buy or Amazon get involved. Kickstarter or Indiegogo might also be good members of such an organization, to add a little more credibility to the products launched on their platforms.

This sort of upfront cash that would be held in escrow to cover six months of cloud and developer costs would be a burden for smaller startups or folks trying to build something in their garage. It would be great to see scholarships or other models arise that could pay those costs for a company that cant otherwise afford it. It could be kind of like a pension plan for IoT devices.

This may not be the right solution, but failed consumer IoT devices and abrupt changes in the business model for connected devices are a very real problem that holds back adoption. I'd love to see us set aside optimism so we can focus on what to do if the companies behind these products fail.


How a new open-source tool can help businesses in the fight against malware – TEISS

New software from BlackBerry makes reverse-engineering malware faster and easier for software engineers.

Reverse-engineering of malware is an extremely time- and labour-intensive process, which can involve hours of disassembling and sometimes deconstructing a software programme. The BlackBerry Research and Intelligence team initially developed this open-source tool for internal use, and is now making it available to the malware reverse-engineering community.

PE Tree is developed in Python and supports the Windows, Linux, and Mac operating systems. It can be installed and run as a standalone application or as an IDAPython plugin. Aimed at the reverse-engineering community, PE Tree also integrates with the Hex-Rays IDA Pro decompiler to allow for easy navigation of PE structures, as well as dumping in-memory PE files and performing import reconstruction.

Image credit: Tom Bonner, Distinguished Threat Researcher, BlackBerry

The cyber-security threat landscape continues to evolve and cyber-attacks are getting more sophisticated with potential to cause greater damage, said Eric Milam, Vice President of Research Operations, BlackBerry. As cyber-criminals up their game, the cyber-security community needs new tools in their arsenal to defend and protect organisations and people. Weve created this solution to help the cyber-security community in this fight, where there are now more than one billion pieces of malware with that number continuing to grow by upwards of 100 million pieces each year.

PE Tree enables reverse-engineers to view Portable Executable (PE) files in a tree view, using pefile and PyQt5, thereby lowering the bar for dumping and reconstructing malware from memory while providing an open-source PE viewer code-base that the community can build upon. These capabilities are critical in the fight to identify and stop various strains of malware.
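PE Tree itself builds its tree view with the pefile library; as a stripped-down illustration of the kind of structure such tools walk, here is a parser for just the DOS header, run against a synthetic byte blob rather than a real binary:

```python
import struct

def parse_dos_header(data):
    # Every PE file opens with a DOS header: a 2-byte "MZ" magic and,
    # at offset 0x3C, the offset of the "PE\0\0" signature (e_lfanew).
    # Tools like PE Tree start here and recurse into NT headers,
    # sections, and import tables to build their tree view.
    if data[:2] != b"MZ":
        raise ValueError("not a PE file")
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    return {"e_magic": data[:2], "e_lfanew": e_lfanew}

# Build a minimal synthetic header instead of reading a real binary.
blob = bytearray(0x48)
blob[:2] = b"MZ"
struct.pack_into("<I", blob, 0x3C, 0x40)
blob[0x40:0x44] = b"PE\x00\x00"

hdr = parse_dos_header(bytes(blob))
```

The real format has dozens more fields, which is precisely why a navigable tree view beats reading hex dumps by hand.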

To learn more and to access the PE Tree source code, please visit the BlackBerry GitHub account.

To read more, please visit the blog post here.

by Tom Bonner, Distinguished Threat Researcher, BlackBerry


Google Cloud launches its Business Application Platform based on Apigee and AppSheet – TechCrunch

Unlike some of its competitors, Google Cloud has recently started emphasizing how its large lineup of different services can be combined to solve common business problems. Instead of trying to sell individual services, Google is focusing on solutions and the latest effort here is what it calls its Business Application Platform, which combines the API management capabilities of Apigee with the no-code application development platform of AppSheet, which Google acquired earlier this year.

As part of this process, Google is also launching a number of new features for both services today. The company is launching the beta of a new API Gateway, built on top of the open-source Envoy project, for example. This is a fully managed service that is meant to make it easier for developers to secure and manage their APIs across Google's cloud computing services and serverless offerings like Cloud Functions and Cloud Run. The new gateway, which has been in alpha for a while now, offers all the standard features you'd expect, including authentication, key validation, and rate limiting.
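Rate limiting in a gateway is commonly implemented as a token bucket per API key. The sketch below is a generic illustration of that technique, not the actual Google Cloud API Gateway implementation:

```python
class TokenBucket:
    """Minimal token-bucket rate limiter of the kind a gateway applies
    per API key (illustrative sketch only)."""

    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity,
        # then spend one token per admitted request.
        elapsed = now - self.last
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Two-request burst capacity, refilling one token per second.
bucket = TokenBucket(capacity=2, refill_per_sec=1.0)
results = [bucket.allow(t) for t in (0.0, 0.0, 0.0, 1.5)]
# The third back-to-back request is rejected; the later one passes.
```

Bursts up to the bucket capacity are admitted immediately, while sustained traffic is throttled to the refill rate, which is why the pattern is a gateway staple.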

As for its low-code service AppSheet, the Google Cloud team is now making it easier to bring in data from third-party applications thanks to the general availability of Apigee as a data source for the service. AppSheet already supported standard sources like MySQL, Salesforce and G Suite, but this new feature adds a lot of flexibility to the service.

With more data comes more complexity, so AppSheet is also launching new tools for automating processes inside the service today, thanks to the early-access launch of AppSheet Automation. Like the rest of AppSheet, the promise here is that developers won't have to write any code. Instead, AppSheet Automation provides a visual interface that, according to Google, offers contextual suggestions based on natural-language inputs.

"We are confident the new category of business application platforms will help empower both technical and line-of-business developers with the core ability to create and extend applications, build and automate workflows, and connect and modernize applications," Google notes in today's announcement. And indeed, this looks like a smart way to combine the no-code environment of AppSheet with the power of Apigee.

Continued here:

Google Cloud launches its Business Application Platform based on Apigee and AppSheet - TechCrunch

Perforce Launches Virtual Event on the Future of Intelligent and Data-Driven DevOps – Southernminn.com

MINNEAPOLIS, Sept. 8, 2020 /PRNewswire/ -- Perforce Software, a provider of solutions to enterprise teams requiring productivity, visibility, and scale along the development lifecycle, today announced the launch of DevOps Next, a virtual conference by and for DevOps industry experts. The half-day of sessions will examine AI and ML's impact on DevOps productivity, coding, testing, and more.

"DevOps has matured significantly. But, we've reached a point where traditional tools often fall short when it comes to large amounts of data," says Perfecto Chief Evangelist and Product Manager Eran Kinsbruner.

"AI and ML offer a wide range of abilities throughout the entire software development lifecycle. DevOps Next will explore how these technologies can enable us to make more data-driven decisions, automate more processes, and deliver higher quality software faster."

DevOps Next is an entirely virtual and free event that will take place on Wednesday, September 30. The event is ideal for practitioners, executives, and managers, as well as other industry professionals who work in the testing and development space. Attendees can select from sessions across three tracks, connect with presenters through live chats, and participate in important discussions with peers.

Keynotes and sessions will cover topics including:

The virtual event coincides with the release of Kinsbruner's highly-anticipated third book, "Accelerating Software Quality: Machine Learning & Artificial Intelligence in the Age of DevOps." The book, spearheaded by Perforce, was written collaboratively by 20 DevOps industry experts and positions readers to make informed, strategic decisions as they adopt AI/ML technologies as part of their DevOps journey.

To register, see the agenda, and learn more about DevOps Next, click here. To preorder "Accelerating Software Quality", click here.

About Perforce

Perforce powers innovation at unrivaled scale. With a portfolio of scalable DevOps solutions, we help modern enterprises overcome complex product development challenges by improving productivity, visibility, and security throughout the product lifecycle. Our portfolio includes solutions for Agile planning & ALM, API management, automated mobile & web testing, embeddable analytics, open source support, repository management, static & dynamic code analysis, version control, and more. With over 15,000 customers, Perforce is trusted by the world's leading brands to drive their business-critical technology development. For more information, visit www.perforce.com.

Media Contacts

PERFORCE GLOBAL
Colleen Kulhanek
Perforce Software
Ph: +1 612 517 2069
ckulhanek@perforce.com

PERFORCE UK/EMEA
Maxine Ambrose
Ambrose Communications
Ph: +44 118 328 0180
perforcepr@ambrosecomms.com

PERFORCE US
Michael Draznin
Waters Communications
Ph: +1 917 921 1039
perforcepr@waterscomms.com

View original post here:

Perforce Launches Virtual Event on the Future of Intelligent and Data-Driven DevOps - Southernminn.com

Why Novak Djokovic Was Disqualified From the U.S. Open – The New York Times

Despite the clarity of the rules, Djokovic pleaded his case for several minutes, saying that the line judge would not need to go to a hospital. Friemel responded to him that the consequences might have been different had the line judge not collapsed to the ground and stayed there for a prolonged time in clear distress.

Djokovic also asked Friemel why he could not simply receive a point penalty or a game penalty instead of being defaulted. Friemel did not, in fact, have an intermediate option. The tennis code of conduct is an escalating scale with clearly defined steps: a warning, followed by a point penalty, followed by a game penalty, followed by a default. But the rules also allow officials to proceed straight to a default after any rule violation that is deemed sufficiently egregious.

As Djokovic had not yet received a warning during the match, Friemel's only options were to warn him or default him, a part of the rule that Djokovic did not appear to be aware of. But after investigating on court, Friemel did not consider a warning because he concluded that the incident clearly warranted a default.

"In the end, in any code violation there is a part of discretion to it, but in this instance, I don't think there was any chance of any opportunity of any other decision other than defaulting Novak, because the facts were so clear, so obvious," Friemel said on Sunday night. "The line umpire was clearly hurt and Novak was angry, he hit the ball recklessly, angrily back and taking everything into consideration, there was no discretion involved."

Djokovic had earned $250,000 for reaching the fourth round of the U.S. Open.

Here's a quick look at the various rules at play:

Players shall not violently, dangerously or with anger hit, kick or throw a tennis ball within the precincts of the tournament site except in the reasonable pursuit of a point during a match (including warm-up). Violation of this Section shall subject a player to a fine up to $20,000 for each violation. In addition, if such violation occurs during a match (including the warm-up), the player shall be penalised in accordance with the Point Penalty Schedule hereinafter set forth. For the purposes of this Rule, abuse of balls is defined as intentionally hitting a ball out of the enclosure of the court, hitting a ball dangerously or recklessly within the court or hitting a ball with negligent disregard of the consequences.

Players shall at all times conduct themselves in a sportsmanlike manner and give due regard to the authority of officials and the rights of opponents, spectators and others. Violation of this Section shall subject a player to a fine up to $20,000 for each violation. In addition, if such violation occurs during a match (including the warmup), the player shall be penalised in accordance with the Point Penalty Schedule hereinafter set forth. In circumstances that are flagrant and particularly injurious to the success of a tournament, or are singularly egregious, a single violation of this Section shall also constitute the Major Offence of Aggravated Behaviour and shall be subject to the additional penalties hereinafter set forth.

Incidents of tennis players striking officials are rare, but not unprecedented. There have been two high-profile incidents of similar defaults in men's tennis, though neither was as significant as the disqualification of a top-seeded player at a Grand Slam event.

Link:

Why Novak Djokovic Was Disqualified From the U.S. Open - The New York Times