Machine Learning Does Not Improve Upon Traditional Regression in Predicting Outcomes in Atrial Fibrillation: An Analysis of the ORBIT-AF and…

Aims

Prediction models for outcomes in atrial fibrillation (AF) are used to guide treatment. While regression models have been the analytic standard for prediction modelling, machine learning (ML) has been promoted as a potentially superior methodology. We compared the performance of ML and regression models in predicting outcomes in AF patients.

The Outcomes Registry for Better Informed Treatment of Atrial Fibrillation (ORBIT-AF) and the Global Anticoagulant Registry in the FIELD (GARFIELD-AF) are population-based registries that include 74 792 AF patients. Models were generated from potential predictors using stepwise logistic regression (STEP), random forests (RF), gradient boosting (GB), and two neural networks (NNs). Discriminatory power was highest for death [STEP area under the curve (AUC) = 0.80 in ORBIT-AF, 0.75 in GARFIELD-AF] and lowest for stroke in all models (STEP AUC = 0.67 in ORBIT-AF, 0.66 in GARFIELD-AF). The discriminatory power of the ML models was similar to or lower than that of the STEP models for most outcomes. The GB model had a higher AUC than STEP for death in GARFIELD-AF (0.76 vs. 0.75), but only nominally, and both performed similarly in ORBIT-AF. The multilayer NN had the lowest discriminatory power for all outcomes. The calibration of the STEP models was more aligned with the observed events for all outcomes. In the cross-registry models, the discriminatory power of the ML models was similar to or lower than that of the STEP models in most cases.

When developed from two large, community-based AF registries, ML techniques did not improve prediction modelling of death, major bleeding, or stroke.
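The head-to-head comparison described above can be sketched with scikit-learn. This is an illustrative example on synthetic data, not the registry analysis: the dataset, class imbalance, and resulting AUC values are stand-ins, and plain (non-stepwise) logistic regression is used for simplicity.

```python
# Sketch: comparing a logistic-regression baseline against gradient boosting
# by AUC, on synthetic data standing in for registry predictors.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced binary outcome (~10% events), 20 candidate predictors.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "gbm": GradientBoostingClassifier(random_state=0),
}
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # AUC is computed from predicted probabilities on held-out data.
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

print(aucs)
```

On data like this, as in the registries, the two families of models often land within a few hundredths of each other in AUC.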


Machine Learning in Medical Imaging Market 2020 : Analysis by Geographical Regions, Type and Application Till 2025 | Zebra, Arterys, Aidoc, MaxQ AI -…

Global Machine Learning in Medical Imaging Industry: growing at a significant CAGR over the 2020-2025 forecast period

Latest Research Report on Machine Learning in Medical Imaging Market which covers Market Overview, Future Economic Impact, Competition by Manufacturers, Supply (Production), and Consumption Analysis

Understand the influence of COVID-19 on the Machine Learning in Medical Imaging Market with our analysts monitoring the situation across the globe.

The market research report on the global Machine Learning in Medical Imaging industry provides a comprehensive study of the various techniques and materials used in the production of Machine Learning in Medical Imaging market products. Starting from industry chain analysis to cost structure analysis, the report analyzes multiple aspects, including the production and end-use segments of the Machine Learning in Medical Imaging market products. The latest trends in the pharmaceutical industry have been detailed in the report to measure their impact on the production of Machine Learning in Medical Imaging market products.

Leading key players in the Machine Learning in Medical Imaging market are Zebra, Arterys, Aidoc, MaxQ AI, Google, Tencent, Alibaba

Get sample of this report @ https://grandviewreport.com/sample/21159

Product Types: Supervised Learning, Unsupervised Learning, Semi-Supervised Learning, Reinforcement Learning

By Application/End-user: Breast, Lung, Neurology, Cardiovascular, Liver

Regional Analysis For Machine Learning in Medical Imaging Market

North America (the United States, Canada, and Mexico); Europe (Germany, France, UK, Russia, and Italy); Asia-Pacific (China, Japan, Korea, India, and Southeast Asia); South America (Brazil, Argentina, Colombia, etc.); The Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria, and South Africa)

Get Discount on Machine Learning in Medical Imaging report @ https://grandviewreport.com/discount/21159

This report comes along with an added Excel data-sheet suite taking quantitative data from all numeric forecasts presented in the report.

Research Methodology: The Machine Learning in Medical Imaging market has been analyzed using an optimum mix of secondary sources and benchmark methodology, alongside a unique blend of primary insights. The contemporary valuation of the market is an integral part of our market sizing and forecasting methodology. Our industry experts and panel of primary members have helped in compiling appropriate aspects with realistic parametric assessments for a comprehensive study.

What's in the offering: The report provides in-depth knowledge about the utilization and adoption of Machine Learning in Medical Imaging across various applications, types, and regions/countries. Furthermore, key stakeholders can ascertain the major trends, investments, drivers, vertical players' initiatives, government pursuits towards product acceptance in the upcoming years, and insights into commercial products present in the market.

Full Report Link @ https://grandviewreport.com/industry-growth/Machine-Learning-in-Medical-Imaging-Market-21159

Lastly, the Machine Learning in Medical Imaging Market study provides essential information about the major challenges that are going to influence market growth. The report additionally provides overall details about the business opportunities to key stakeholders to expand their business and capture revenues in the precise verticals. The report will help the existing or upcoming companies in this market to examine the various aspects of this domain before investing or expanding their business in the Machine Learning in Medical Imaging market.

Contact Us: Grand View Report (UK) +44-208-133-9198 (APAC) +91-73789-80300 Email: [emailprotected]


news and analysis for omnichannel retailers – Retail Technology Innovation Hub

Machine learning algorithms learn patterns from past data and predict trends and the best price. These algorithms can predict the best price, discount price, and promotional price based on competition, macroeconomic variables, seasonality, and so on.

To find the correct price in real time, retailers follow these steps:

Gather input data

To build a machine learning algorithm, retailers collect various data points from customers. These are:

Transactional data

This includes the sales history of each customer and the products they have bought in the past.

Product description

The brand, product category, style, photos, and selling price of previously sold products are collected. Past promotions and campaigns are also analysed to find the effect of price changes on each category.

Customer details

Demographic details and customer feedback are gathered.

Competition and inventory

Retailers also try to find data on the prices of products sold by their competitors, as well as supply chain and inventory data.

Depending on the set of key performance indicators defined by the retailers, the relevant data is filtered.

For every industry, pricing involves different goals and constraints. In terms of its dynamic nature, the retail industry can be compared to the casino industry, where machine learning is involved in online live dealer casino games too.

Like casinos, retail also has the target of profit maximisation and retention of customer loyalty. Each of these goals and constraints can be fed to a machine learning algorithm to generate dynamic prices of products.
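The pipeline described above can be sketched as a regression from the gathered inputs to a price. Everything here is assumed for illustration: the feature names, the synthetic pricing rule, and the query values are not from any real retailer's data.

```python
# Sketch: hypothetical features (past sales, competitor price, seasonality)
# feed a gradient-boosting model that suggests a price.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
# Assumed inputs per product-week.
units_sold = rng.integers(0, 200, n)          # transactional data
competitor_price = rng.uniform(10, 50, n)     # competition data
month = rng.integers(1, 13, n)                # seasonality proxy
X = np.column_stack([units_sold, competitor_price, month])

# Synthetic "best price": tracks the competitor with a small seasonal bump.
y = (0.95 * competitor_price
     + 0.5 * np.sin(month / 12 * 2 * np.pi)
     + rng.normal(0, 1, n))

model = GradientBoostingRegressor(random_state=0).fit(X, y)
# Query: high demand (120 units), competitor at $30, December.
suggested = model.predict([[120, 30.0, 12]])[0]
print(round(suggested, 2))
```

In a real deployment the target would come from historically observed prices and their sales outcomes rather than a formula, and the model would be retrained as competitor and inventory feeds update.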


This know-it-all AI learns by reading the entire web nonstop – MIT Technology Review

This is a problem if we want AIs to be trustworthy. That's why Diffbot takes a different approach. It is building an AI that reads every page on the entire public web, in multiple languages, and extracts as many facts from those pages as it can.

Like GPT-3, Diffbot's system learns by vacuuming up vast amounts of human-written text found online. But instead of using that data to train a language model, Diffbot turns what it reads into a series of three-part factoids that relate one thing to another: subject, verb, object.

Pointed at my bio, for example, Diffbot learns that Will Douglas Heaven is a journalist; Will Douglas Heaven works at MIT Technology Review; MIT Technology Review is a media company; and so on. Each of these factoids gets joined up with billions of others in a sprawling, interconnected network of facts. This is known as a knowledge graph.
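A toy version of these three-part factoids can be held as (subject, relation, object) triples. The facts below come straight from the example in the text; the relation names and the query helper are invented for illustration.

```python
# Minimal knowledge graph: a set of (subject, relation, object) triples,
# indexed by subject so facts about an entity can be looked up by pattern.
from collections import defaultdict

triples = [
    ("Will Douglas Heaven", "is_a", "journalist"),
    ("Will Douglas Heaven", "works_at", "MIT Technology Review"),
    ("MIT Technology Review", "is_a", "media company"),
]

by_subject = defaultdict(list)
for s, r, o in triples:
    by_subject[s].append((r, o))

def query(subject, relation):
    """Return all objects linked to `subject` via `relation`."""
    return [o for r, o in by_subject[subject] if r == relation]

print(query("Will Douglas Heaven", "works_at"))  # ['MIT Technology Review']
```

Diffbot's graph works on the same shape of data at the scale of billions of triples, with machine extraction replacing the hand-entered list here.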

Knowledge graphs are not new. They have been around for decades, and were a fundamental concept in early AI research. But constructing and maintaining knowledge graphs has typically been done by hand, which is hard. This also stopped Tim Berners-Lee from realizing what he called the semantic web, which would have included information for machines as well as humans, so that bots could book our flights, do our shopping, or give smarter answers to questions than search engines.

A few years ago, Google started using knowledge graphs too. Search for Katy Perry and you will get a box next to the main search results telling you that Katy Perry is an American singer-songwriter with music available on YouTube, Spotify, and Deezer. You can see at a glance that she is married to Orlando Bloom, she's 35, and worth $125 million, and so on. Instead of giving you a list of links to pages about Katy Perry, Google gives you a set of facts about her drawn from its knowledge graph.

But Google only does this for its most popular search terms. Diffbot wants to do it for everything. By fully automating the construction process, Diffbot has been able to build what may be the largest knowledge graph ever.

Alongside Google and Microsoft, it is one of only three US companies that crawl the entire public web. "It definitely makes sense to crawl the web," says Victoria Lin, a research scientist at Salesforce who works on natural-language processing and knowledge representation. "A lot of human effort can otherwise go into making a large knowledge base." Heiko Paulheim at the University of Mannheim in Germany agrees: "Automation is the only way to build large-scale knowledge graphs."

To collect its facts, Diffbot's AI reads the web as a human would, but much faster. Using a super-charged version of the Chrome browser, the AI views the raw pixels of a web page and uses image-recognition algorithms to categorize the page as one of 20 different types, including video, image, article, event, and discussion thread. It then identifies key elements on the page, such as headline, author, product description, or price, and uses NLP to extract facts from any text.

Every three-part factoid gets added to the knowledge graph. Diffbot extracts facts from pages written in any language, which means that it can answer queries about Katy Perry, say, using facts taken from articles in Chinese or Arabic even if they do not contain the term Katy Perry.

Browsing the web like a human lets the AI see the same facts that we see. It also means it has had to learn to navigate the web like us. The AI must scroll down, switch between tabs, and click away pop-ups. "The AI has to play the web like a video game just to experience the pages," says Tung.

Diffbot crawls the web nonstop and rebuilds its knowledge graph every four to five days. According to Tung, the AI adds 100 million to 150 million entities each month as new people pop up online, companies are created, and products are launched. It uses more machine-learning algorithms to fuse new facts with old, creating new connections or overwriting out-of-date ones. Diffbot has to add new hardware to its data center as the knowledge graph grows.

Researchers can access Diffbot's knowledge graph for free. But Diffbot also has around 400 paying customers. The search engine DuckDuckGo uses it to generate its own Google-like boxes. Snapchat uses it to extract highlights from news pages. The popular wedding-planner app Zola uses it to help people make wedding lists, pulling in images and prices. NASDAQ, which provides information about the stock market, uses it for financial research.

Adidas and Nike even use it to search the web for counterfeit shoes. A search engine will return a long list of sites that mention Nike trainers. But Diffbot lets these companies look for sites that are actually selling their shoes, rather than just talking about them.

For now, these companies must interact with Diffbot using code. But Tung plans to add a natural-language interface. Ultimately, he wants to build what he calls a universal factoid question answering system: an AI that could answer almost anything you asked it, with sources to back up its response.

Tung and Lin agree that this kind of AI cannot be built with language models alone. But better yet would be to combine the technologies, using a language model like GPT-3 to craft a human-like front end for a know-it-all bot.

Still, even an AI that has its facts straight is not necessarily smart. "We're not trying to define what intelligence is, or anything like that," says Tung. "We're just trying to build something useful."


Python Is About to Get the Squeeze – Built In

Python was released in the 1990s as a general-purpose programming language. Despite its clean syntax, the exposure Python got in its first decade wasn't encouraging, and it didn't really find inroads into the developer's workspace. Perl was the first-choice scripting language, and Java had established itself as the go-to in the object-oriented programming arena. Of course, any language takes time to mature and only gets adopted when it's better suited to a task than the existing tools.

For Python, that time first arrived during the early 2000s, when people started realizing it has an easier learning curve than Perl and offers interoperability with other languages. This realization led to a larger number of developers incorporating Python into their applications. The emergence of Django eventually led to the doom of Perl, and Python started gaining more momentum. Still, it wasn't even close in popularity to Java and JavaScript, both of which were newer than Python.

Fast forward to the present, and Python has trumped Java to become the second-most-popular language according to the StackOverflow Developer Survey 2019. It was also the fastest-growing programming language of the previous decade. Python's rise in popularity has a lot to do with the emergence of big data in the 2010s, as well as developments in machine learning and artificial intelligence. Businesses urgently required a language for quick development with low barriers to entry that could help manage large-scale data and scientific computing tasks. Python was well-suited to all these challenges.

Besides having those factors in its favor, Python was an interpreted language with dynamic typing support. More importantly, it had the backing of Google, who'd invested in Python for Tensorflow, which led to its emergence as the preferred language for data analysis, visualization, and machine learning.

Yet, despite the growing demand for machine learning and AI at the turn of this decade, Python won't stay around for long. Like every programming language, it has its own set of weaknesses. Those weaknesses make it vulnerable to replacement by languages more suited to the common tasks businesses ask of them. Despite the presence of R, the emergence of newer languages such as Swift, Julia, and Rust actually poses a bigger threat to the current king of data science.

Rust is still trying to catch up with the machine learning community, and so I believe Swift and Julia are the languages that will dethrone Python and eventually rule data science. Let's see why the odds are against Python.

All good things come at a cost, and Python's dynamically typed nature is no exception. It hampers developers, especially when running code in production. Dynamic typing, which makes it easy to write code quickly without defining types, increases the risk of running into runtime issues, especially as the codebase grows. Bugs that a compiler would easily catch can go unidentified in Python, causing hindrances in production and ultimately slowing the development process in large-scale applications.

Worse, unlike compiled code, Python's interpreter analyzes every line of code at execution time. This leads to overhead that causes significantly slower performance compared to other languages.
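A minimal illustration of the dynamic-typing risk described above: the bad call below is accepted at runtime, where a static compiler would have rejected it, and it produces a silently wrong result rather than an error.

```python
# A function with no declared types: any argument combination is accepted.
def total_price(quantity, unit_price):
    return quantity * unit_price

print(total_price(3, 2.5))  # 7.5 -- the intended use

# A caller passes the price as a string by mistake. A statically typed
# compiler would reject this call; Python instead "succeeds", because
# int * str repeats the string.
oops = total_price(3, "2.5")
print(oops)  # 2.52.52.5 -- silently wrong, no exception raised
```

Bugs like this surface only when the faulty line executes, which is exactly the production-time hazard the paragraph describes.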

Julia allows you to avoid some of these problems. Despite being dynamically typed, it has a just-in-time compiler. The JIT compiler either generates the machine code right before it's executed or uses previously stored, cached compilations, which makes it as performant as statically typed languages. More importantly, it has a key feature known as multiple dispatch that is like the function overloading of OOP, albeit resolved at runtime. The power of multiple dispatch lies in its ability to handle different argument types without the need to create separate function names or nested if statements. This helps in writing compact code, which is a big win in numeric computations since, unlike in Python, you can easily scale solutions to deal with all types of arguments.
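For contrast, Python's standard library offers only a single-argument cousin of this idea: `functools.singledispatch` selects an implementation from the type of the first argument alone, which illustrates the dispatch concept while underscoring what Julia adds by dispatching on all arguments at once.

```python
# functools.singledispatch picks an implementation based on the type of the
# FIRST argument; Julia's multiple dispatch generalizes this to all arguments.
from functools import singledispatch

@singledispatch
def describe(x):
    return "something"          # fallback for unregistered types

@describe.register
def _(x: int):
    return "an integer"

@describe.register
def _(x: list):
    return "a list"

print(describe(3))       # an integer
print(describe([1, 2]))  # a list
print(describe(3.0))     # something (falls back to the default)
```

Without dispatch, the same behavior requires isinstance checks or separate function names, which is exactly the clutter the paragraph says multiple dispatch avoids.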

Even better, Swift is a statically typed language and is highly optimized thanks to its LLVM (Low-Level Virtual Machine) compiler. LLVM makes it possible to quickly compile into assembly code, making Swift super-efficient and almost as fast as C. Swift also boasts better memory safety and a memory-management tool known as Automatic Reference Counting (ARC). Unlike garbage collectors, ARC is a lot more deterministic, as it reclaims memory whenever the reference count hits zero.

As compiled languages that offer type annotations, Swift and Julia are a lot faster and robust for development than Python. That alone might be enough to recommend them over the older language, but there are other factors to consider as well.

As if slowness were not the most obvious drawback of Python, the language also has limitations with parallel computing.

In short, Python uses the GIL (Global Interpreter Lock), which prevents multiple threads from executing at the same time in order to boost single-threaded performance. This is a big hindrance because it means developers cannot use multiple CPU cores for intensive computing.
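The constraint can be seen in a small sketch: two threads split a CPU-bound countdown, but under the GIL they take turns on the interpreter rather than running on separate cores. Timings are machine-dependent, so only correct completion is shown here.

```python
# Two threads splitting pure-Python, CPU-bound work. They run correctly,
# but the GIL lets only one thread execute bytecode at a time, so this
# gains nothing over a single thread on a multi-core machine.
import threading

def count_down(n, results, i):
    while n > 0:
        n -= 1
    results[i] = "done"

N = 2_000_000
results = [None, None]
threads = [threading.Thread(target=count_down, args=(N // 2, results, i))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # ['done', 'done'] -- correct, just not parallel
# The usual escape hatch is the multiprocessing module, which uses separate
# processes (each with its own GIL) to occupy multiple cores.
```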

I agree with the commonplace notion that we're currently doing fine when leveraging Python's interoperability with C/C++ libraries like Tensorflow and PyTorch. But a Python wrapper doesn't solve all debugging issues. Ultimately, when inspecting the underlying low-level code, we're falling back on C and C++. Essentially, we can't leverage the strengths of Python at the low level, which puts it out of the picture.

This factor will soon play a decisive role in the fall of Python and the rise of Julia and Swift. Julia is a language designed specifically to address the shortcomings of Python. It primarily offers three features: coroutines (asynchronous tasks), multi-threading, and distributed computing, all of which show the immense possibilities for concurrent and parallel programming. This structure makes Julia capable of performing scientific computations and solving big data problems at a far greater speed than Python.

On the other hand, Swift possesses all the tools required for developing mobile apps and has no problems with parallel computing.

Despite the disadvantages it has with respect to speed, multi-threading, and type-safety, Python still has a huge ecosystem that boasts an enormous set of libraries and packages. Understandably, Swift and Julia are still infants in the field of machine learning and possess only a limited number of libraries. Yet, their interoperability with Python more than compensates for the lack of library support in Julia and Swift.

Julia not only lets programmers use Python code (and vice-versa), but also supports interoperability with C, R, Java, and almost every major programming language. This versatility would certainly give the language a good boost and increase its chances of a quick adoption among data scientists.

Swift, on the other hand, provides interoperability with Python through the PythonKit library. The biggest selling point for Swift (which has an Apple origin) is the strong support it's been getting from Google, which fully backed Python decades ago. See how the tables have turned!

Also, the fact that the creator of Swift, Chris Lattner, is now working on Google's AI brain team just shows that Swift is being seriously groomed as Python's replacement in the machine learning field. The Tensorflow team investing in Swift with their S4TF project only further proves that the language isn't merely regarded as a wrapper over Python. Instead, Swift, thanks to its differentiable programming support and ability to work at a low level like C, will potentially be used to replace the underlying deep learning tools.

As the size of data continues to increase, Python's Achilles heel will soon be found out. Gone are the days when ease of use and the ability to write code quickly mattered most. Speed and parallel computing are the name of the game, and Python, a more general-purpose language, no longer solves that problem. Inevitably, it will fade away while Julia and Swift look like the candidates to take over the reins.

It's important to note that Python as a programming language won't disappear any time soon. Instead, it'll only take a back seat in data science as the languages more specifically designed for deep learning take the lead.


Why use of Open Source must be accounted for in Business Continuity Planning of a business – Lexology

Today, the use of Open Source Software to develop commercial software is the norm for businesses. A critical concern for such businesses is ensuring business continuity at all costs.

Could the use of Open Source Software hamper business continuity? We investigated a landmark security event in the history of Open Source Software to vet this question.

The Comfort or peril of using Open Source Software

Open Source Software provides unique building blocks for software development in today's age. The time, effort, and investment that would have been required to build similar software from scratch with a dedicated in-house team is unimaginable! Most companies today understand the trade-off very clearly and therefore allow the use of Open Source Software in their projects, while greenfield development stays focused where the core Intellectual Property of such companies lies.

Companies often frame detailed policies to regulate the use of Open Source Software, which we will discuss in further detail in later blog posts, drawing on our experience of creating Open Source policies for different types of companies.

So, there is an evident comfort in using Open Source, yet such use is not without its own peril.

Remember Heartbleed!

Heartbleed was detected as a serious security vulnerability on 1st April 2014, which led many commentators to gasp and note that this could be the worst kind of vulnerability seen since the beginning of the Internet.

Just to refresh the memory: it is suspected that while OpenSSL was being updated with the Heartbeat extension, the vulnerability crept into the source code through a possible coding oversight by the developers, as far back as 2012. Though often contested, it is also suspected that some busybodies might have exploited the vulnerability at least five months prior to its eventual detection and disclosure to the public.

Codenomicon, now acquired by Synopsys, maintains a page at https://heartbleed.com/, which serves as a constant reminder of the vulnerability that had a massive effect on IT infrastructure maintainers across the globe.

But then we ask: what happens when such a vulnerability gets detected, and worse still, when a threat actor exploits it?

The typical response of businesses ranges from:

immediately stopping access to such software completely and introducing a patch to rectify the vulnerability before re-initiating access, ...to negotiating with the threat actor for recovery of lost data, while tackling questions from privacy regulators as well as the data owners in parallel.

Amid all of this, business continuity gets lost, raising serious questions about the company's readiness to ensure it.

What could cause such a peril?

While facing Heartbleed, a pertinent question got asked: was it avoidable?

Through a blog post by Steve Marquess, the then CEO of the OpenSSL Software Foundation, we get a rare view of the inner workings of such a popular Open Source project. Steve says that most of the revenue collected by the OpenSSL Software Foundation comes from two streams: donations and commercial contracts for support.

Though the project had been very popular even before Heartbleed happened, it is evident that the flow of money into the project had been scarce compared to what would be required to maintain a project of such scale. Also, it seems the developers, who were mostly working part-time on the project, were driven above all by their passion to create, and even though they were highly capable, an oversight could not be ruled out.

Post Heartbleed, when the issue came to the fore, financial support got extended from different quarters.

Even at the peril of hindsight bias, it could be argued that the incident was not entirely unavoidable, or at least not unpredictable for the businesses that got affected, given the clear indicator that such an important project had such an informal approach to maintenance.

What should bother most businesses is that such errors, driven by the lack of support most Open Source projects receive, at least in their initial stages and without formal backers, are the rule rather than the exception.

So, what do you do?

Review. The first step starts with determining the Open Source Software that matters to your business. While advising our clients, this is the most common response we receive: "our code has grown organically; we aren't able to distinguish Open Source Software from the code written in-house."

Revise. Check and rectify whether there is Open Source Software with known identifiers that can cause you trouble. If the need arises, plan to move towards mature and robust Open Source Software, if you do not intend to move to in-house development, and better, contribute to the community through donations, commercial support, or vulnerability rectifications. It is always a great practice to remain invested in the community that supports your business.

Prepare to Respond. Finding a vulnerability is inevitable. The strength of a business is a function of how it responds to a crisis. So, define a response plan in case a Heartbleed strikes your business, including who would respond to it, how you would arrest it, and how you would communicate to your clients, partners, regulators, etc.

Remember, your comfort in using Open Source Software must not affect your Business Continuity.


Consumer IoT is broken and our stupid optimism is to blame – Stacey on IoT

Some of us have a closet full of connected devices that at one time represented the cutting edge of smart home tech: devices like the Revolv hub, the Lighthouse camera, the Jibo robot, the Petnet feeder, and the original Sonos speakers. And while most of us were able to shrug off the end of Juicero, and the related loss of $400, as the inevitable cost of living the early adopter lifestyle, the perception of expensive, short-lived gadgets haunts consumer IoT.

Every time these stories hit the tech press or the mainstream media, a much larger group of people congratulate themselves for not buying into the latest hype around connected devices and smart homes. They recognize that this tech is new, unproven, and likely not as convenient or necessary as its creators claim. Which is why, even more than the lack of standards that we in the smart home world constantly bemoan, the lack of faith in the life of a connected product hurts the IoT. After all, if you can't convince someone to buy a connected product in the first place, they'll never reach the point where they'll want it to interoperate with other devices.

So what's the industry to do? The common demand after a product fails and the companies that make them tell their customers they're turning off the servers (if they do, in fact, tell customers that) is to open-source the device code so the tech-savvy early adopters can keep the device operational. But while this sounds great, it's akin to cryonically freezing your head in the hopes of coming back to life after death.

Getting the code running on a connected device without having access to the backend cloud code and the application code may allow the device to run, but the overall experience will suffer. The device may work, but it won't have a good interface (or you'll have to build and maintain one), and it won't have a cloud component for remote access and other functionality (unless you build and maintain one). Just like your newly thawed head will need a body, the open-source device code needs someone to build and maintain a cloud backend and a mobile application.

There are third-party companies, such as Digital Dream Labs, which raised funds to take over the support and development of the Anki Vector robot, that are attempting to take device code and build infrastructure around it. But doing so requires expertise, time, and money. Are customers willing to pay someone a second time just to keep their devices running?

So when asking a company to open up its source code for a connected device, ask yourself: if your frozen brain were successfully revived, would that be enough for you?

Such partial resurrection was the plan all the way back in 2015, when we started seriously discussing what happens when connected products fail. Some companies put code in escrow so people could have it later and maintain it. In the meantime, I encouraged venture firms, entrepreneurs, and development shops to think about failure and how their product might gracefully degrade to give the consumer time to come to terms with the loss.

Now I realize that keeping code in escrow and thinking about failure are only part of the solution. Granted, because people crazy enough to build a connected pet feeder are often stupidly optimistic, it's still good advice. Please do think about what happens if your business fails so you can build a decent experience for the end consumer. Also, set milestones that indicate failure so you can warn your customers in time and perhaps allocate some of the diminished cash on hand to ensuring a graceful shutdown.

By far the greatest challenge for connected device companies is the ongoing cost of operating those devices. An investor in Mellow, the maker of a connected sous vide machine that recently told customers they needed to pay a subscription fee or see certain features vanish, explained the issue. He told me that the company has roughly $4,000 in monthly costs associated with the device, and those costs go up the more people use it. And that doesn't include the ongoing costs associated with having a developer update the Android and iOS apps.

Some of those monthly bills might be lowered by choosing a different cloud architecture or security platform, but every connected device company has to account for ongoing maintenance costs. And the more features a company adds and the more customers it has, the higher those costs tend to go. At an event I hosted in August, Matt Van Horn, CEO of June Life, said that his company's cloud bill continues to rise, and he doesn't have the resources or cloud infrastructure that Amazon or Google do.

So one option might be to only buy gadgets made by those companies, since I doubt AWS is going to shut Alexa down if that business unit stops paying its cloud bills. But that's a really limiting option for consumers and for innovation in the sector overall. Nate Williams, a former employee at August and now an investor at Union Labs Ventures, says he thinks some kind of model built around an independent organization that companies pay into, and that will operate and support a device and the supporting server code going forward, might help.

He initially likened it to a homeowners association for smart home devices, but given the negative connotations around HOAs, he then clarified that he was seeking a sense of shared responsibility as opposed to something punitive. But I think having a little enforcement might actually be good. We could see companies pay into an organization that ensures a product has six months' or a year's worth of cloud and developer costs in escrow, to at least ensure a failed company can keep a product running for a little while longer after giving customers notice that it will die.

That organization should also have some sort of provision for getting the remaining stock of a defunct product off the shelves. Indeed, Id love to see retailers like Best Buy or Amazon get involved. Kickstarter or Indiegogo might also be good members of such an organization to add a little more credibility to the products launched on their platform.

Requiring this sort of upfront cash, held in escrow to cover six months of cloud and developer costs, would be a burden for smaller startups or folks trying to build something in their garage. It would be great to see scholarships or other models arise that could pay those costs for a company that can't otherwise afford it. It could be kind of like a pension plan for IoT devices.

This may not be the right solution, but failed consumer IoT devices and abrupt changes in the business model for connected devices are a very real problem that holds back adoption. I'd love to see us set aside optimism so we could focus on what to do if the companies behind these products fail.


Continued here:

Consumer IoT is broken and our stupid optimism is to blame - Stacey on IoT

How a new open-source tool can help businesses in the fight against malware – TEISS

New software from BlackBerry makes reverse-engineering malware faster and easier for software engineers.

Reverse-engineering of malware is an extremely time- and labour-intensive process, which can involve hours of disassembling and sometimes deconstructing a software programme. The BlackBerry Research and Intelligence team initially developed this open-source tool for internal use, and is now making it available to the malware reverse-engineering community.

PE Tree is developed in Python and supports the Windows, Linux and Mac operating systems. It can be installed and run as a standalone application. Aimed at the reverse-engineering community, PE Tree also integrates with the Hex-Rays IDA Pro decompiler to allow for easy navigation of PE structures, as well as dumping in-memory PE files and performing import reconstruction.

Image credit: Tom Bonner, Distinguished Threat Researcher, BlackBerry

"The cyber-security threat landscape continues to evolve, and cyber-attacks are getting more sophisticated, with the potential to cause greater damage," said Eric Milam, Vice President of Research Operations, BlackBerry. "As cyber-criminals up their game, the cyber-security community needs new tools in their arsenal to defend and protect organisations and people. We've created this solution to help the cyber-security community in this fight, where there are now more than one billion pieces of malware, with that number continuing to grow by upwards of 100 million pieces each year."

PE Tree enables reverse-engineers to view Portable Executable (PE) files in a tree view, using pefile and PyQt5, thereby lowering the bar for dumping and reconstructing malware from memory while providing an open-source PE viewer code base that the community can build upon. The IDA Pro integration and import reconstruction capabilities are critical in the fight to identify and stop various strains of malware.
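PE Tree itself builds its tree view on top of the pefile library; as a rough, standard-library-only illustration of the kind of header walking such tools automate, here is a sketch that reads the DOS header that opens every PE file (the hand-crafted header bytes and function name are purely illustrative, not PE Tree's API):

```python
import struct

def parse_dos_header(data: bytes) -> dict:
    # Every PE file begins with a DOS header: the 'MZ' magic bytes,
    # and at offset 0x3C a 4-byte field (e_lfanew) that points to
    # the PE signature deeper in the file.
    if data[:2] != b"MZ":
        raise ValueError("not a PE file: missing MZ magic")
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    return {"magic": "MZ", "e_lfanew": e_lfanew}

# Hand-crafted 64-byte DOS header whose e_lfanew field is 0x40
header = b"MZ" + b"\x00" * 58 + struct.pack("<I", 0x40)
print(parse_dos_header(header))  # {'magic': 'MZ', 'e_lfanew': 64}
```

A real viewer recurses from e_lfanew into the NT headers, section table and import directory, which is exactly the structure a tree widget makes navigable.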

To learn more and to access the PE Tree source code, please visit the BlackBerry GitHub account.

To read more, please visit the blog post here.

by Tom Bonner, Distinguished Threat Researcher, BlackBerry

Read the rest here:

How a new open-source tool can help businesses in the fight against malware - TEISS

Google Cloud launches its Business Application Platform based on Apigee and AppSheet – TechCrunch

Unlike some of its competitors, Google Cloud has recently started emphasizing how its large lineup of different services can be combined to solve common business problems. Instead of trying to sell individual services, Google is focusing on solutions and the latest effort here is what it calls its Business Application Platform, which combines the API management capabilities of Apigee with the no-code application development platform of AppSheet, which Google acquired earlier this year.

As part of this process, Google is also launching a number of new features for both services today. The company is launching the beta of a new API Gateway, built on top of the open-source Envoy project, for example. This is a fully managed service that is meant to make it easier for developers to secure and manage their APIs across Google's cloud computing services and serverless offerings like Cloud Functions and Cloud Run. The new gateway, which has been in alpha for a while now, offers all the standard features you'd expect, including authentication, key validation and rate limiting.
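With a managed gateway, rate limiting is configured rather than hand-written, but the idea underneath is the classic token bucket. A minimal sketch (class name and parameters are my own for illustration, not Google's API):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: tokens refill at a steady rate,
    each allowed request spends one, and the capacity caps bursts."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, then spend a token if available.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)  # 5 req/s steady, burst of 2
results = [bucket.allow() for _ in range(4)]
print(results)  # the first two pass; back-to-back extras are throttled
```

A gateway applies the same accounting per API key, which is why key validation and rate limiting are usually mentioned in the same breath.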

As for its low-code service AppSheet, the Google Cloud team is now making it easier to bring in data from third-party applications, thanks to the general availability of Apigee as a data source for the service. AppSheet already supported standard sources like MySQL, Salesforce and G Suite, but this new feature adds a lot of flexibility to the service.

With more data comes more complexity, so AppSheet is also launching new tools today for automating processes inside the service, thanks to the early-access launch of AppSheet Automation. Like the rest of AppSheet, the promise here is that developers won't have to write any code. Instead, AppSheet Automation provides a visual interface that, according to Google, provides contextual suggestions based on natural language inputs.

"We are confident the new category of business application platforms will help empower both technical and line-of-business developers with the core ability to create and extend applications, build and automate workflows, and connect and modernize applications," Google notes in today's announcement. And indeed, this looks like a smart way to combine the no-code environment of AppSheet with the power of Apigee.

Continued here:

Google Cloud launches its Business Application Platform based on Apigee and AppSheet - TechCrunch

Why Novak Djokovic Was Disqualified From the U.S. Open – The New York Times

Despite the clarity of the rules, Djokovic pleaded his case for several minutes, saying that the line judge would not need to go to a hospital. Friemel responded that the consequences might have been different had the line judge not collapsed to the ground and stayed there for a prolonged time in clear distress.

Djokovic also asked Friemel why he could not simply receive a point penalty or game penalty instead of being defaulted. Friemel did not, in fact, have an intermediate option. The code of conduct in tennis is an escalating scale with clearly defined steps: a warning, followed by a point penalty, followed by a game penalty, followed by a default. But the rules also allow officials the option of proceeding straight to a default after any rule violation if it is deemed sufficiently egregious.

As Djokovic had not yet received a warning during the match, Friemel's only options were to warn him or default him, a part of the rule that Djokovic did not appear to be aware of. But after investigating on court, Friemel did not consider a warning, because he concluded that the incident clearly warranted a default.

"In the end, in any code violation there is a part of discretion to it, but in this instance, I don't think there was any chance of any opportunity of any other decision other than defaulting Novak, because the facts were so clear, so obvious," Friemel said on Sunday night. "The line umpire was clearly hurt, and Novak was angry; he hit the ball recklessly, angrily back, and taking everything into consideration, there was no discretion involved."

Djokovic had earned $250,000 for reaching the fourth round of the U.S. Open.

Heres a quick look at the various rules at play:

Players shall not violently, dangerously or with anger hit, kick or throw a tennis ball within the precincts of the tournament site except in the reasonable pursuit of a point during a match (including warm-up). Violation of this Section shall subject a player to fine up to $20,000 for each violation. In addition, if such violation occurs during a match (including the warmup) the player shall be penalised in accordance with the Point Penalty Schedule hereinafter set forth. For the purposes of this Rule, abuse of balls is defined as intentionally hitting a ball out of the enclosure of the court, hitting a ball dangerously or recklessly within the court or hitting a ball with negligent disregard of the consequences.

Players shall at all times conduct themselves in a sportsmanlike manner and give due regard to the authority of officials and the rights of opponents, spectators and others. Violation of this Section shall subject a player to a fine up to $20,000 for each violation. In addition, if such violation occurs during a match (including the warmup), the player shall be penalised in accordance with the Point Penalty Schedule hereinafter set forth. In circumstances that are flagrant and particularly injurious to the success of a tournament, or are singularly egregious, a single violation of this Section shall also constitute the Major Offence of Aggravated Behaviour and shall be subject to the additional penalties hereinafter set forth.

Incidents of tennis players striking officials are rare, but not unprecedented. There have been two high-profile incidents of similar defaults in men's tennis, though none as significant as the disqualification of a top-seeded player at a Grand Slam event.

Link:

Why Novak Djokovic Was Disqualified From the U.S. Open - The New York Times