Reflect brings automated no-code web testing to the cloud – VentureBeat

Every company is now a software company, or so we're told, meaning they have to employ designers and developers capable of building websites and apps. In tandem, the much-reported software developer shortage means companies across the spectrum are in a constant battle for top talent. This is opening the doors to more automated tools that democratize some of the processes involved in shipping software, while freeing developers to work on other mission-critical tasks.

It's against this backdrop that Reflect has come to market, serving as an automated, end-to-end testing platform that allows businesses to test web apps from an end user's perspective, identifying glitches before they go live. Founded out of Philadelphia in 2019, the Y Combinator (YC) alum today announced a $1.8 million seed round of funding led by Battery Ventures and Craft Ventures, as it looks to take on incumbents with a slightly different proposition.

Similar to others in the space, Reflect hooks into the various elements of a browser so it can capture actions the user is taking, including scrolls, taps, clicks, hovers, field entry, and so on. This can be replicated later as part of an automated test to monitor the new user signup flow for a SaaS app, for example. If the test later throws up an error, perhaps due to a change made to the user interface, the quality assurance (QA) team can be notified instantly with a full video reproducing the bug, along with relevant logs.

Above: Reflect: Viewing a replay
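For readers who have only seen code-based end-to-end testing, it may help to see what such a recorded flow replaces. Below is a hand-written browser-automation sketch, using Selenium rather than Reflect, of the kind of signup check described above; the URL and element IDs are invented for illustration.

```python
# A minimal Selenium sketch of a signup-flow check, the sort of script a
# no-code recorder writes for you. The site and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example-saas.test/signup")          # hypothetical app under test
    driver.find_element(By.ID, "email").send_keys("qa@example.com")
    driver.find_element(By.ID, "password").send_keys("correct-horse-battery")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # The assertion is what turns a recording into a test: fail loudly if a
    # UI change breaks the flow, so QA can be notified with logs and a replay.
    assert "Welcome" in driver.page_source
finally:
    driver.quit()
```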

There are a number of notable players in the automated web testing space, including the open source testing framework Selenium, and Cypress, which raised a $40 million funding round just last month. And the low-to-no code space has the likes of Testim, which also covers native mobile apps, and GV-backed Mabl, which was launched by two former Googlers back in 2018. But Reflect is setting out to differentiate its offering in a number of ways.

First up, rather than using browser extensions that record actions locally, Reflect records actions via a virtual machine (VM) in its cloud and screen-shares it back to the user through the Reflect web app. This helps eliminate the causes of common recording errors, like cookies, VPNs, or extensions such as ad blockers that may impact the state of the browser.

In short, Reflect standardizes the testing environment and neutralizes potential interference, all without requiring any installations.

"This approach lets us completely control the test environment, which means we more accurately capture each action you take when testing your site, even for complex actions like drag-and-drops or file uploads," Reflect cofounder Todd McNeal told VentureBeat.

Above: Reflect: Recording a test

McNeal said the company has already amassed more than 80 paying customers who subscribe through a SaaS model that starts at free for three users and 30 minutes of execution time per month. Starter, standard, and enterprise plans offer more features and flexibility.

There are potential downsides to handing full control to a third-party's cloud. Many businesses, particularly larger enterprises, would be more comfortable with an on-premises Reflect installation, something that offers them more control, which would be pertinent if Reflect ever went bust. An open source route might also make some sense for this reason, affording companies greater freedom in terms of how they deploy Reflect. But that would come with major trade-offs in terms of Reflect's no-code aspirations.

"On-premise installation is something we may add in the future. It has come up with larger enterprises, for sure," McNeal said. "We're not considering the open source route, though; our goal, and what we think the market is looking for, is something that hides away the complexities, and we think the best way to do that is via the no-code approach."

Being truly no-code, as McNeal puts it (versus low-code, which may require some form of coding expertise to script specific actions), could also help it become the go-to tool for non-developers.

"It means that you can truly give our product to anyone in the organization; it doesn't have to be just developers," McNeal said. "Also, since we don't have the crutch of code to fall back to, it ensures that our recorder needs to be accurate in order to allow customers to test these complex actions."

It's worth noting that Reflect also offers an API and direct CI/CD integrations, enabling its customers to integrate Reflect more deeply into their DevOps processes and schedule tests after every deployment, for example, or even after every pull request.
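As an illustration of what that kind of integration might look like in practice, here is a hedged sketch of a pipeline step that kicks off a hosted test suite after a deploy and fails the build if it does not pass. The base URL, paths, payload fields and header below are invented placeholders, not Reflect's documented API.

```python
# Hypothetical post-deploy step: trigger a hosted test suite over HTTP and
# block the pipeline until it passes. All endpoints here are placeholders.
import os
import sys
import time

import requests

API_BASE = "https://api.example-testing-service.test"   # placeholder, not a real service
HEADERS = {"X-API-Key": os.environ.get("TESTING_API_KEY", "")}

def run_suite_and_wait(suite: str = "smoke", poll_seconds: int = 15) -> str:
    """Start a test-suite execution and poll until it finishes."""
    run = requests.post(f"{API_BASE}/suites/{suite}/executions",
                        headers=HEADERS, timeout=30).json()
    while True:
        status = requests.get(f"{API_BASE}/executions/{run['id']}",
                              headers=HEADERS, timeout=30).json()
        if status["state"] in ("passed", "failed"):
            return status["state"]
        time.sleep(poll_seconds)

if __name__ == "__main__":
    # A non-zero exit code is what makes the CD step (and the deploy) fail.
    sys.exit(0 if run_suite_and_wait() == "passed" else 1)
```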

The broader no-code movement has emerged as a major trend in recent years, with Gartner predicting in a 2019 report that by 2023 citizen developers within large enterprises will outnumber professional developers by at least 4 times. This shift is evidenced by a flurry of activity across the space over the past year, with the likes of Amazon's AWS launching a no-code app development platform called Honeycode, while Google last year snapped up enterprise-focused AppSheet. Earlier this month, no-code development platform Webflow raised $140 million at a $2.1 billion valuation.

It's clear what benefits automated, no-code platforms could bring to smaller businesses, but why would larger enterprises with plenty of resources be drawn to such tools?

"It comes down to what we consider the biggest problems with automated end-to-end testing tools today: tests take too long to create and they're too difficult to maintain," McNeal said. "At an enterprise, you have the resources to make this work. You can afford to have developers working full-time on this, who have expertise in the tool necessary to build and maintain your own custom test framework and a suite of code-based tests. But if you can get the same result, the same peace of mind that your application works, with a lot less time and effort, we think that's a pretty compelling value proposition."

Moreover, even the largest companies have to battle to hire and retain their top technical talent and ensure their time is optimized. By going no-code, they can delegate more QA work to less technically skilled personnel.

"It lets enterprises take full advantage of testers in their organization that aren't developers," McNeal added. "Whereas today those testers are doing primarily manual testing, Reflect actually lets a tester with no coding experience build and maintain entire test suites without any developer intervention."

It is still early days for Reflect. Although it's showing some promise, it lacks some of the smarts of its rivals, such as AI or machine learning that can adapt and self-improve over time. However, this is on its roadmap.

"Our approach thus far has been to really get the underpinnings of the product correct, and that's rooted in accurately capturing and replicating the actions the user takes in the browser," McNeal said. "We'll be augmenting this with ML in the future."


Android 12 looks set to borrow one of the best iOS features – Wi-Fi sharing – TechRadar

While Android 11 didn't bring many new features to Android phones, it seems 2021's version of the Google-built software will, as we've now heard about one great Android 12 feature.

This comes from GizChina, which spotted a new entry in the Android Open Source Project (AOSP) regarding a 'Share Wi-Fi' feature. This apparently shares Wi-Fi passwords with nearby devices of your choosing, even ones in other rooms.

Android is open-source software, meaning people can copy it, tweak it, and make it their own if they want; indeed, most phone makers do so for their own handsets. New entries to it, and changes to the source code (as this is), are therefore likely to be new Android features.

This new AOSP entry hasn't been 'merged' yet, which basically means it's not official, but since it came from a Google engineer it could get merged soon. Until then we can't say for sure if it's coming to Android 12, but by the sounds of it, we'd sure hope so.

Current builds of Android let you share Wi-Fi via QR codes, so if you're connected to the Wi-Fi you can get your phone to provide a QR code, and the person who needs the Wi-Fi can scan the code to get access to the internet.
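For the curious, the QR code in question typically just encodes the network credentials in a standard text payload that phone cameras understand. A minimal sketch of generating one is below, using the third-party qrcode Python package; the SSID and password are made-up examples.

```python
# Minimal sketch: encode Wi-Fi credentials in the common WIFI: payload format
# and render it as a QR image. The SSID and password are placeholders.
import qrcode

ssid, password = "HomeNetwork", "hunter2-example"   # illustrative values only
payload = f"WIFI:T:WPA;S:{ssid};P:{password};;"     # format most phone cameras recognise

qrcode.make(payload).save("wifi.png")               # scan wifi.png to join the network
```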

The system works, but it can be a bit fiddly, especially for people who don't know how to scan QR codes or don't have a readily available scanning app.

Apple products like those on iOS, iPadOS and macOS have a much better system - if somebody near you starts trying to connect to the internet, a prompt appears on your device letting you share your Wi-Fi information with them. It's particularly handy if you've got multiple devices you're trying to connect to the web.

That's a beloved feature among Apple fans, and it certainly is handy, so a similar feature in Android phones would be appreciated.

We'll have to wait and see if this feature does make its way to Android 12 at the end of the year when that operating system is set to launch, though the roll-out of various betas in the first half of the year should give us some clues.


Acronis SCS and Leading Academics Partner to Develop AI-based Risk Scoring Model – Unite.AI

U.S. cyber protection company Acronis SCS has partnered with leading academics to improve software through the use of artificial intelligence (AI). The collaboration developed an AI-based risk scoring model capable of quantitatively assessing software code vulnerability.

The new model demonstrated a 41% improvement in detecting common vulnerabilities and exposures (CVEs) during its first stage of analysis. Subsequent tests produced equally impressive results, and Acronis SCS is set to share the model upon its completion.

One of the greatest aspects of this technology is that it can be utilized by other software vendors and public sector organizations. Through its use, software supply chain validation can be improved without hurting innovation or small business opportunity, and it is an affordable tool for these organizations.

Acronis SCS' AI-based model relies on a deep learning neural network that scans both open-source and proprietary source code. It can provide impartial, quantitative risk scores that IT administrators can then use to make informed decisions about deploying new software packages and updating existing ones.

The company uses a language model to embed code. A type of deep learning, the language model combines an embedding layer with a recurrent neural network (RNN). Up-sampling techniques and classification algorithms such as boosting, random forests, and neural networks are used to train the model, whose performance is measured by ROC/AUC and percentile lift.

Dr. Joe Barr is Acronis SCS' Senior Director of Research.

"We use a language model to embed code. A language model is a form of deep learning which combines an embedding layer with a recurrent neural network (RNN)," Dr. Barr told Unite.AI.

"The input consists of function pairs (function, tag) and the output is a probability P(y=1 | x) that a function is vulnerable to hack (buggy). Because positive tags are rare, we use various up-sampling techniques and classification algorithms (like boosting, random forests and neural networks). We measure goodness by ROC/AUC and a percentile lift (number of bads in top k percentile, k=1,2,3,4,5)."
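To make the description concrete, here is a minimal PyTorch sketch of the general architecture Dr. Barr describes: an embedding layer feeding a recurrent network that outputs P(y=1 | x) for a tokenized function, scored with ROC/AUC. It illustrates the technique only; it is not Acronis SCS' actual model, and the vocabulary size, dimensions and toy data are invented.

```python
# Illustrative embedding + RNN classifier for "is this function vulnerable?"
# (a sketch of the described approach, not the production model).
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

class CodeVulnerabilityClassifier(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, sequence_length) integer-encoded source tokens
        embedded = self.embedding(token_ids)
        _, last_hidden = self.rnn(embedded)
        logits = self.head(last_hidden.squeeze(0))
        return torch.sigmoid(logits).squeeze(-1)    # P(y=1 | x) per function

# Toy usage: score a small batch of already-tokenized functions.
# In practice the rare positive tags would be handled with up-sampling or
# class weighting, as the article's quote notes.
model = CodeVulnerabilityClassifier(vocab_size=10_000)
batch = torch.randint(1, 10_000, (8, 200))                      # 8 functions, 200 tokens each
labels = torch.tensor([1., 0., 1., 0., 1., 0., 0., 1.])          # 1 = known-vulnerable
scores = model(batch)
print("ROC AUC on toy batch:", roc_auc_score(labels.numpy(), scores.detach().numpy()))
```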

Another great opportunity for this technology is its ability to make the validation process far more efficient.

"Supply chain validation, placed inside a validation process, will help identify buggy/vulnerable code and will make the validation process more efficient by several orders of magnitude," he continued.

As with all AI and software, it is crucial to understand and address any potential risks. When asked if there are any risks unique to open source software (OSS), Dr. Barr said there are both generic and specific risks.

"There are generic risks and specific risks," he said. "The generic risk includes innocent bugs in the code which may be exploited by a nefarious actor. Specific risks relate to an adversarial actor (like a state-sponsored agency) who deliberately introduces bugs into open source to be exploited at some point."

The initial results of the analysis were published in an IEEE paper titled 'Combinatorial Code Classification & Vulnerability.'


AWS wants to tempt customers into switching to Linux – TechRadar

Another tech giant has thrown its weight behind Linux partnerships after Amazon Web Services (AWS) praised the system when launching the source code for its latest open source tool on GitHub.

The open source Porting Assistant for .NET is designed to scan .NET apps and list the things that need to be fixed in order to port the app to Linux. This, AWS argues, will help customers take advantage of the performance, cost savings, and robust ecosystem of Linux.

This choice of words has to be taken in context with the release of the AWS UI, which the company describes as just the first step in a larger process of creating a new open source design system.

As per reports, these recent releases are part of a larger move in the company to switch to JavaScript/TypeScript and React in order to build cross-platform user interface components, getting the benefit of being able to share libraries between web and desktop.

The basis for this assumption is two-pronged. First is the fact that the user interface for the Porting Assistant for .NET is written in React, although it could have just as easily been developed in .NET.

It is seconded by the release of AWS UI, which the company describes as "a collection of React components that help create intuitive, responsive, and accessible user experiences for web applications."

While AWS doesn't create client applications, its embrace of React and this move towards what it describes as a new open source design system is perhaps done with the purpose of easing access to its services.

It's argued that switching to a new open source, platform-agnostic design methodology will surely make AWS services easier to consume and increase their adoption.

Via: The Register


Elevate your security posture and readiness for 2021 – GCN.com

For some agencies, the SolarWinds attack was simply a wake-up call. For untold thousands of others, it was a tangible threat to digital assets with the potential for real-world consequences. While only 50 such organizations are thought to be genuinely impacted by the breach -- and the ramifications may be years or decades from full discovery -- it is clear that agencies must strongly reconsider their security posture and organizational readiness in light of the attack.

What does that mean for government IT personnel and related stakeholders? As the people keeping vital information systems safe, the best thing agencies and staff can do is find ways to apply these lessons in day-to-day operations.

The software supply chain matters more than ever

The potential for supply chain attacks and breaches is "far from a new concept," one ComplianceWeek piece noted, but recent examples remind us that attackers can leverage third-party code to directly compromise agency systems. Software supply chain attacks are up more than 400%, pointing to an increasingly attractive avenue of attack.

Also of concern is the practice of using free or open-source tools. While it is tempting to use free solutions, the risk of breach is quite high. By nature, open-source supply chain software is even more vulnerable to compromise by nefarious nation-state-sponsored hackers intent on breaching U.S. homeland defense and public safety organizations.

Organizations prioritizing security should avoid open-source software altogether, and those using prepackaged application programming interfaces and other third-party components must make a stronger commitment to testing, verifying and securing code integrated from outside sources. An initial breach in one system can allow attackers to gain increasing control over time, leapfrog to other systems and ultimately infect those outside the agency via a compromised update.

Agencies must likewise verify the safety of any third-party systems that integrate with or use core agency computing or infrastructure systems -- such as a vendor's scheduling program sending automated update emails over the network -- and confirm the security of the vendors used by their third-party partners as much as possible.

Even within local government, every agency's digital topography will consist of dozens or even hundreds of third-party products, themselves composed of hundreds more underlying third-party components.

Using guidance from the Federal Risk and Authorization Management Program and Federal Information Security Modernization Act, agencies can conduct a thorough audit of their third-party contractors by asking these questions:

Knowing these answers can make life much easier both during normal operations and in the event of a breach. Strong organizational readiness requires deep knowledge into the systems, processes and organizations with which agencies work.

Move from blacklisting to a whitelisting strategy

Think of blacklisting -- banning malicious or untrustworthy activity -- as a reactive approach to security. In contrast, whitelisting is a proactive strategy that assigns trust to reliable sources instead of revoking trust when things go wrong.

How do things look when an agency approaches security from a trust-giving perspective instead of a trust-taking one? Agencies can model the idea over any number of digital activities, from web traffic to application data to inbound network requests from presumably trustworthy sources.
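A toy sketch of the difference on inbound network requests is below; the addresses and subnets are illustrative, not agency policy. A denylist lets anything through unless it has been flagged, while an allowlist refuses anything that has not been explicitly trusted.

```python
# Minimal sketch contrasting the two postures on inbound network requests.
from ipaddress import ip_address, ip_network

DENYLIST = {ip_network("203.0.113.0/24")}      # reactive: block known-bad sources
ALLOWLIST = {ip_network("10.20.0.0/16")}       # proactive: permit known-good sources only

def blacklisting_allows(source: str) -> bool:
    addr = ip_address(source)
    return not any(addr in net for net in DENYLIST)   # everything passes unless flagged

def whitelisting_allows(source: str) -> bool:
    addr = ip_address(source)
    return any(addr in net for net in ALLOWLIST)      # nothing passes unless trusted

print(blacklisting_allows("198.51.100.7"))   # True: unknown traffic gets through
print(whitelisting_allows("198.51.100.7"))   # False: unknown traffic is refused
```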

Embrace the zero-trust model

In a technology environment with so many moving parts, it can be difficult to monitor all suspicious activity. Instead of trying to identify all potentially nefarious actors, consider a zero-trust security model -- a system of governance aligned to the trust-giving perspective. Having caught the IT world by storm, the idea as described by one expert in a CSO piece is quite simple: "Cut off all access until the network knows who you are. Don't allow access to IP addresses, machines, etc. until you know who that user is and whether they're authorized."

In a public-safety context, for example, the concept of inside vs. outside is key. While older castle-and-moat governance styles give a large degree of freedom to devices and users once they've been permitted past the initial moat, zero trust regards interior users with a consistent level of wariness.

With a castle-and-moat model, hackers can leverage the trust allocated to vendors to compromise agency systems more easily -- executing remote commands, sniffing passwords and more. A system that instead requires components to be identified, justified and authenticated at all points is one that can more easily catch compromises and prevent further access. This makes a zero-trust model a serious consideration for IT managers trying to keep operations secure with minimal manual intervention.

Check weak points before it's too late

Knowing about potential (or even confirmed) breaches has obvious value and is also a boon for an agency's overall security posture -- understanding weaknesses and points of entry means they can be addressed.


Ryan Abernathey: Helping to Open a Universe of Data to the World – State of the Planet

Ryan Abernathey is a physical oceanographer in the Department of Earth and Environmental Sciences and the Lamont-Doherty Earth Observatory at Columbia University. The Oceanography Society named Abernathey among three recipients of its Early Career Award.

Earth's climate system is experiencing unprecedented change as human-made greenhouse gas emissions continue to perturb the global energy balance. Understanding and forecasting the nature of this change, and its impact on human welfare, is both a profound scientific problem and an urgent societal need. Embedded in that scientific task is a technological challenge. New observational technologies are bringing in a flood of new information. Applying data science to this immense stream of information allows science to more deeply explore aspects of climate. However, this astonishing volume of data creates a different challenge: the need for tools that can scale to the size of our ever-expanding datasets.

Unraveling and interpreting that data is of particular fascination to Ryan Abernathey. The physical oceanographer is an associate professor in Columbia University's Department of Earth and Environmental Sciences who also leads the Ocean Transport Group at Lamont-Doherty Earth Observatory. His research focuses on the role of ocean circulation in the climate system, particularly mesoscale ocean dynamics: the processes that occur at horizontal scales of less than 100 kilometers. A computer modeler as well as a physical oceanographer, Abernathey uses satellite data, computer models, and supercomputing clusters to study the impacts of mesoscale turbulence on the larger circulation of heat, water, and nutrients in the global oceans.

This week, The Oceanography Society named Abernathey among three recipients of its very first Early Career Award. The award recognizes individuals who have demonstrated extraordinary scientific excellence and possess the potential to shape the future of oceanography. The Early Career Award also recognizes individuals who have made significant contributions toward educating and mentoring in the ocean sciences community and/or who have a record of outstanding outreach and/or science communication beyond the scientific community. Abernathey is creating a unique impact in these areas. Below, he discusses the award, his work, the role of big data, and what it all means to future research.

Q: Congratulations, Ryan. You say your work has two parallel threads; how would you describe your objectives?

A: The central mission for our research group is to understand ocean transport, or how stuff moves around in the ocean. By stuff we mean, first and foremost, just the water itself, the ocean currents and the ways those currents transport things we care about. For example, the way they help heat enter the ocean as part of global warming. This matters a lot for the climate and ocean ecosystems. The way we do that is by using two main tools: satellite observations and high-resolution simulations or models. What both of these tools allow us to do is see small-scale ocean processes with more clarity so we can understand them better. And that leads to the data and computing side of our work. We need to see these small-scale processes better. That means we need high quality images with more detail, but that amounts to much bigger files. These satellite observations and high-resolution images create a whole lot of data to deal with.

Q: Why is it important to get a better understanding of the role of small-scale ocean processes in the ocean?

A: A specific example is phytoplankton. These tiny organisms are the lungs of the ocean; they consume CO2, photosynthesize, and breathe out oxygen. But they also need nutrients in order to grow. There is growing evidence that small-scale ocean features, like eddies and fronts, are a really important source of nutrients for these organisms. But the global climate models we use to project future climate change are too coarse to properly represent these features, which means those projections may be missing something. By studying these processes in detail, we can get a sense of what might be missing.

Q: How have you dealt with the problem of having so much data to process that it can overwhelm available computational systems?

A: I've discovered I just love building tools for working with data and putting them into the hands of as many people as possible, and seeing those people use those tools to do their own research. That's really satisfying to me, personally. This is not necessarily the most common activity of a scientist. Typically, researchers are expected to produce more and more papers detailing their scientific findings, so this focus on building tools has really been a pivot in my career. It's been incredibly satisfying. It's really kind of a community effort.

Q: Community is a big focus for you. For instance, the work you did to bring about and now lead Pangeo: An Open Source Big Data Climate Science Platform. Why is creating open source code so important to you?

A: I just feel that it's a place I can contribute, and I like doing it, and it's going to have a real, broad impact. I think a lot of people recognize the challenge of working with these really large data sets, but the unique thing our project brings to the table is a vision for what to do about it, an idea of what the future infrastructure for data and computing will look like for oceanography. Participating in data-intensive research requires a lot of expensive infrastructure, and that is exclusionary. So, there's also a sort of democratizing aspect to what we're trying to do: to make it possible for anyone, at any institution anywhere in the world, to do this data- and computationally intensive research.

Q: Clearly, the award takes into account your specific approach to science. Was that important to you?

A: I'm glad the award did cite my work on open software and tools because it's something that's traditionally undervalued by the academic reward system. The fact that it can be recognized is a sign of progress. It's not just about publishing papers. I'm pleased that this output of mine is recognized. That is indicative of a cultural evolution in the incentive structure in academia.

Q: What is most exciting to you about your work?

A: I love the data. I genuinely love looking at ocean data sets, particularly really large, complex, and beautiful ones that reveal these turbulent ocean processes. On a very aesthetic level, I just love to look at and work with ocean data. It's sort of a unifying thread throughout all this work. The day-to-day motivation is about truth and beauty and these more abstract scientific ideals.


CTO power panel: Shaping the future of cloud at the edge – SiliconANGLE News

Edge computing is an adolescent market just starting a growth spurt. Its predicted surge from $3.6 billion in 2020 to $15.7 billion by 2025 comes from the enormous diversity of potential use cases. But like any talented teen, edge technology has to decide exactly what it is, where it belongs, and how it's going to get there.

When defining the edge, it's easier to say what it isn't: "It's anywhere that you're going to have IT capacity that isn't aggregated into a public or private cloud data center," said John Roese (pictured, left), global chief technology officer of products and operations at Dell Technologies Inc.

"The edge is really the place where data is created, processed and/or consumed," said Chris Wolf (pictured, right), vice president of the Advanced Technology Group, Office of the CTO, at VMware Inc. "What's interesting here is that you have a number of challenges in that edges are different. You have all these different use cases. So what we're seeing is you can't just say 'this is our edge platform' and go consume it, because it won't work. You have to have multiple flavors of your edge platform."

Wolf and Roese spoke with Dave Vellante and John Furrier, co-hosts of theCUBE, SiliconANGLE Media's livestreaming studio, during theCUBE on Cloud event. They discussed key technology trends that will shape the future of cloud at the edge, including what belongs at the edge, issues with security and latency, and how to define a software framework for the edge.

There may be many use cases for edge, but not all potential uses are productive ones. After a year of testing with customers, Dell has come up with four major reasons why a company should build an edge platform.

The first is latency: "If you need real-time responsiveness in the full closed-loop of processing data, you might want to put it in an edge," Roese said.

But then comes the question of defining the real-time responsiveness necessary for each specific use case. "The latency around real-time processing matters," Roese stated. "Real-time might be one millisecond; it might be 30 milliseconds; it might be 50 milliseconds. If it turns out that it's 50 milliseconds, you probably can do that in a colocated data center pretty far away from those devices. [If it's] one millisecond, you better be doing it on the device itself."
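A back-of-the-envelope way to see why the threshold matters: light in fiber covers roughly 200 kilometers per millisecond one way, so the latency budget alone bounds how far from the device the compute can sit. The sketch below uses that rule of thumb purely for illustration and ignores processing, queuing and last-mile delays.

```python
# Rough placement check: how far away can the compute live before propagation
# delay alone blows the round-trip latency budget? (Illustrative rule of thumb.)
FIBER_KM_PER_MS = 200   # approximate one-way distance light travels in fiber per millisecond

def max_one_way_distance_km(latency_budget_ms: float) -> float:
    one_way_budget_ms = latency_budget_ms / 2     # the budget covers the round trip
    return one_way_budget_ms * FIBER_KM_PER_MS

for budget_ms in (1, 30, 50):
    print(f"{budget_ms:>2} ms budget -> compute within ~{max_one_way_distance_km(budget_ms):,.0f} km")
# 1 ms  -> ~100 km: effectively on or right next to the device
# 50 ms -> ~5,000 km: a colocated data center far from the device can work
```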

The second revolves around requirements for data flow. "There's so much data being created at the edge that if you just flow it all the way across the internet, you'll overwhelm the internet," Roese said. "So we need to pre-process and post-process data and control the flow across the world."

The third question on edge relevancy centers on whether the use case requires the convergence of information technology and operations technology. "The IT/OT boundary that we all know, that was the IoT thing that we were dealing with for a long time," Roese added.

Fourth and potentially most important is security.

"[Edge] is a place where you might want to inject your security boundaries because security tends to be a huge problem in connected things," Roese stated, mentioning the security-enabled edge, or as Gartner named it, secure access service edge, aka SASE. "If data's everything, the flow of data ultimately turns into the flow of information, the knowledge and wisdom and action. If you pollute the data, if you can compromise it at the most rudimentary levels by putting bad data into a sensor or tricking the sensor, which lots of people can do, or simulating a sensor, you can actually distort things like AI algorithms."

Agility is key to edge, with the COVID pandemic demonstrating how companies with established edge platforms were able to react at speed, according to Wolf.

"When you have a truly software-defined edge, you can make some of these rapid pivots quite quickly," he said, giving the example of how Vanderbilt University adapted one of its parking garages into a thousand-bed hospital ward. "They needed dynamic network and security to be able to accommodate that."

The software behind the edge needs to be open, according to Wolf. "We see open source as the key enabler for driving edge innovation and driving an ISV ecosystem around that edge innovation," he said.

The first step in defining this currently very complex area is to separate edge platforms from the edge workload, according to Roese.

"You do not build your cloud, your edge platform, co-mingled with the thing that runs on it. That's like building your app into the OS, and that's just dumb," he stated.

Recognizing that "humans are bad when it comes to really complex distributed systems" is also important, according to Roese, who advocates the use of low-code architectures interfaced via APIs through CI/CD pipelines.

"What we're finding is that most of the code being pushed into production benefits from using things like Kubernetes or container orchestration or even functional frameworks," he said. "It turns out that those actually work reasonably well."

This links with VMware's bet on open source as the software of choice for the edge. Multiple Kubernetes open-source projects are currently addressing a variety of edge use cases.

"Whether it's k3s or KubeEdge or OpenYurt or superedge, the list goes on and on," Wolf said. However, he pointed out that Kubernetes is perhaps not always the best approach, as it was designed for data center infrastructure, not edge computing.

Open source projects that take a different approach include the open software platform EdgeX Foundry, which is "about giving you a PaaS for some of your IoT apps and services," Wolf said, noting that the solution is currently seeing growth in China.

Addressing machine learning at the edge through a federated machine learning model is the open-source FATE project, and Wolf and VMware are laying bets on this approach. "We think this is going to be the long-term dominant model for localized machine learning training as we continue to see massive scale-out to these edge sites," Wolf stated.
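The idea behind federated learning is simple even if the production frameworks are not: each edge site trains on its own data and only model updates travel to a coordinator, which averages them. Below is a minimal federated-averaging sketch in plain NumPy; it illustrates the concept, not the FATE project's actual API, and the linear model and synthetic data are invented for the example.

```python
# Federated averaging (FedAvg) sketch: sites train locally, raw data never leaves.
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One edge site refines the shared linear model on its own data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w: np.ndarray, sites) -> np.ndarray:
    """Coordinator averages each site's update, weighted by its sample count."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in sites]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for n in (50, 80, 120):                       # three edge sites with different amounts of data
    X = rng.normal(size=(n, 2))
    sites.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, sites)
print("recovered weights:", w)                # approaches [2, -1] without pooling the raw data
```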

Dell's long-term vision for edge software is that it "really needs to be the same code base that we're using in data centers and public clouds," according to Roese. "It needs to be the same cloud stack, the same orchestration level, the same automation level," he said. "Because what you're doing at the edge is not something bespoke. You're taking a piece of your data pipeline and you're pushing it to the edge, and the other pieces are living in private data centers and public clouds, and you'd like them to all operate under the same framework."


How the Digital DIY movement thrived in 2020 – TechRadar

Living through an extended period of lockdown this year forced many of us into becoming DIY try-ers, either tackling jobs in the house we would normally avoid or simply trying something new as a way to be creative. For many, this mindset was entirely novel, but for an army of online builders, it was merely an extension of what has already been done previously.

Active before the pandemic, but able to operate through it, the collection of brains behind Digital DIY is continually focused on making the latest applications and software packages. From the smart home to novel robotics, these tinkerers and makers, as they are known, are taking advantage of open source software and online communities to solve a whole host of challenges.

There are two main ways in which lockdown has made people more interested in tinkering. First, being stuck at home has led to more free time; instead of going out and socializing or losing time to commuting, people have had more time to themselves, and so have been able to take on new hobbies or learn new skills. For the technical and the curious out there, this has turned into capacity for new projects. For some, this will mean using software to solve challenges for the first time, while those already involved in Digital DIY have had more time to see existing projects through, solving the problems they first set out to tackle.

An example of this is the creation of a Raspberry Pi-powered sous chef, which is configured to automate pan-cooking tasks so that the user can focus on more advanced culinary activity. This tinkerer created an 'Onion Bot' to autonomously control the temperature of the pan on the stove, using a PID control system, and it is set up to remind the chef if they haven't stirred the pan after a designated time. This type of project is typical of the home-based 'Digital DIY' we have seen during lockdown, with people taking on more challenges than they have before.
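At the heart of a project like that sits a classic PID loop: compare the measured pan temperature to a target, and drive the heater with a weighted sum of the error, its accumulation, and its rate of change. The sketch below shows the general shape of such a loop; the gains and the read_temp_c/set_heater_power callbacks are placeholders, not the maker's actual code.

```python
# Minimal PID temperature loop sketch; gains and hardware callbacks are illustrative.
import time

KP, KI, KD = 2.0, 0.1, 0.5          # illustrative gains; real values need tuning

def pid_step(setpoint, measured, integral, prev_error, dt):
    error = setpoint - measured
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = KP * error + KI * integral + KD * derivative
    return output, integral, error

def control_loop(read_temp_c, set_heater_power, target_c=180.0, dt=1.0):
    """read_temp_c() returns the pan temperature; set_heater_power(p) sets 0-100% duty."""
    integral, prev_error = 0.0, 0.0
    while True:
        power, integral, prev_error = pid_step(target_c, read_temp_c(),
                                               integral, prev_error, dt)
        set_heater_power(max(0.0, min(100.0, power)))   # clamp to the heater's range
        time.sleep(dt)
```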

The second reason for the expansion during lockdown is more online socializing. More people are signing up to forums or looking for communities surrounding their hobbies and interests and are involving themselves in these communities. For the tinkerers out there, this has led to a boost in traffic and greater visibility of their pages and discussions. The increase in traffic has led to more contributors, more sign ups and more tutorials. The size of the community has expanded because of this and led to more people getting involved in open source software.

Digital DIY is an umbrella term for countless different projects that people are involved in, whereby anyone can build things or learn about technology. This could be someone learning on their own or by interacting with hundreds of different communities of like-minded, curious, enthusiastic people who share their passion. This isn't a new movement, because digital creation and tinkering have been around for decades. The availability of affordable hardware, meanwhile, has played a huge part in Digital DIY from the beginning. Hardware like the Raspberry Pi, or derivatives of similar use, opened the door to a new realm of digital making. Beforehand, this was done predominantly on web, desktop or mobile apps, with code, where only tech companies with a sizable research and development budget could work on digital device projects.

With the addition of affordable hardware, individuals could suddenly make their own devices, or products, adding a whole new dimension to what digital making was. This makes Digital DIY possible even with something as obscure as e-paper (the material used for Kindle devices), which has been used to create an IoT-controlled message board using a Raspberry Pi. This is configured by connecting the e-paper display to the Google Docs API, so that the message board can poll a Google Sheet and update itself whenever there is new data. This is an example of small-scale DIY that can be done within the home and is increasingly possible even without a significant budget or hardware.
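One simple way such a board can be wired up, sketched below under stated assumptions: the sheet is published to the web as CSV, the script polls it and redraws the display only when the latest row changes. SHEET_CSV_URL and update_epaper() are placeholders for the real sheet and the e-paper driver.

```python
# Poll a published Google Sheet (CSV export) and refresh an e-paper display
# only when the latest message changes. URL and display driver are placeholders.
import csv
import io
import time

import requests

SHEET_CSV_URL = "https://docs.google.com/spreadsheets/d/<sheet-id>/export?format=csv"
POLL_SECONDS = 60

def latest_message() -> str:
    rows = list(csv.reader(io.StringIO(requests.get(SHEET_CSV_URL, timeout=10).text)))
    return rows[-1][0] if rows else ""

def run(update_epaper):
    shown = None
    while True:
        message = latest_message()
        if message != shown:            # e-paper refreshes are slow, so only redraw on change
            update_epaper(message)
            shown = message
        time.sleep(POLL_SECONDS)
```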

A key aspect of this is people's ability to solve problems through open source software and online communities. These communities enable the sharing of support, advice, ideas, and general encouragement. This simplifies the flow of ideas and means people can share their experiences more quickly, as more people working on open source code or ideas results in quicker turnarounds and better solutions.

Countless enterprises have emerged from DIY and tinkering; start-ups appear every year based on one form of digital making or another. Enterprises are realizing the vast untapped potential within tinkering and the tinkering community. Big businesses are good for platforms: for creating, maintaining and supporting the platforms on which tinkerers and digital makers work. The more that big business turns to open source and invests in these tools and platforms, the easier digital making becomes and the greater its quality. Because of this, it's easier for people to make their own projects and products, and to form communities that carry innovative weight.


Tiger King and Julian Assange miss out on Trump pardons – but plenty of controversial figures make the grade – The Northern Echo

One name missing in US president Donald Trump's flurry of pardons is Tiger King Joe Exotic.

His team was so confident in a pardon that they had readied a celebratory limousine and a hair and wardrobe team to whisk away the zookeeper-turned-reality-TV-star, who is now serving a 22-year federal prison sentence in Texas.

But he was not on the list announced on Wednesday morning.

Pictured: Joe Exotic in Tiger King: Murder, Mayhem And Madness (PA Photo/Netflix)

Joe Exotic, whose real name is Joseph Maldonado-Passage, was sentenced in January 2020 to 22 years in federal prison for violating federal wildlife laws and for his role in a failed murder-for-hire plot targeting his chief rival, Carole Baskin, who runs a rescue sanctuary for big cats in Florida. Ms Baskin was not harmed.

Maldonado-Passage, who has maintained his innocence, was also sentenced for killing five tigers, selling tiger cubs and falsifying wildlife records. A jury convicted him in April 2019.

In his pardon application filed in September, Maldonado-Passage's lawyers argued that he was "railroaded and betrayed" by others.

Maldonado-Passage, 57, is scheduled to be released from custody in 2037, but his lawyers said in the application that "he will likely die in prison" because of health concerns.

Maldonado-Passage's legal team did not immediately respond to a request for comment early on Wednesday.

The blonde mullet-wearing zookeeper, known for his expletive-laden rants on YouTube and a failed 2018 Oklahoma gubernatorial campaign, was prominently featured in the popular Netflix documentary Tiger King: Murder, Mayhem and Madness.

Another famous face to miss out was Wikileaks founder Julian Assange.

Mr Assange's supporters had hoped Mr Trump would pardon the Wikileaks founder; however, he did not make the list.

Pictured: Julian Assange (file photo dated 19/05/17), who is not among the round of pardons US President Donald Trump issued in his final hours in office.

Mr Assange's partner Stella Moris previously said: "I urge the (US) Department of Justice to drop the charges and the President of the United States to pardon Julian."

Earlier this month, Mr Assange won his fight to avoid extradition to the United States but was denied bail under strict conditions for fear he could abscond and deny prosecutors the chance to appeal.

The 49-year-old is wanted to face an 18-count indictment, alleging a plot to hack computers and a conspiracy to obtain and disclose national defence information.

The US Government has formally lodged an appeal against the decision to block Mr Assange's extradition.

SO WHO DID MAKE THE CUT?

President Trump pardoned or commuted the sentences of 143 people early on Wednesday, just hours before the inauguration of President-elect Joe Biden.

Among the prominent names to have received a presidential pardon are Mr Trump's former strategist Steve Bannon, US rapper Lil Wayne and Republican fundraiser Elliott Broidy.

Most noteworthy was the pardoning of Mr Bannon.

A statement from the White House said: "Prosecutors pursued Mr Bannon with charges related to fraud stemming from his involvement in a political project.

"Mr Bannon has been an important leader in the conservative movement and is known for his political acumen."

Bannon had been charged with duping thousands of investors who believed their money would be used to fulfil Mr Trump's chief campaign promise to build a wall along the southern border.

Instead, he allegedly diverted over a million dollars, paying a salary to one campaign official and personal expenses for himself.

In August, he was pulled from a luxury yacht off the coast of Connecticut and brought before a judge in Manhattan, where he pleaded not guilty.

Mr Trump has already pardoned a slew of long-time associates and supporters, including his former campaign chairman, Paul Manafort; Charles Kushner, the father of his son-in-law; his long-time friend and adviser Roger Stone; and his former national security adviser Michael Flynn.

Besides Bannon, other pardon recipients included Elliott Broidy, a Republican fundraiser who pleaded guilty last autumn in a scheme to lobby the Trump administration to drop an investigation into the looting of a Malaysian wealth fund, and Ken Kurson, a friend of Trump son-in-law Jared Kushner who was charged last October with cyberstalking during a divorce.

Bannon's pardon was especially notable given that the prosecution was still in its early stages and any trial was months away. Whereas pardon recipients are conventionally thought of as defendants who have faced justice, often by having served at least some prison time, the pardon nullifies the prosecution and effectively eliminates any prospect for punishment.

Wednesday's list also includes rappers Lil Wayne and Kodak Black, both convicted in Florida on weapons charges.

Wayne, whose real name is Dwayne Michael Carter, has frequently expressed support for Mr Trump and recently met with the president on criminal justice issues.

Others on the list included Death Row Records co-founder Michael Harris and New York art dealer and collector Hillel Nahmad.

Other pardon recipients include former Representative Rick Renzi, an Arizona Republican who served three years for corruption, money laundering and other charges, and former Representative Duke Cunningham, who was convicted of accepting 2.4 million dollars in bribes from defence contractors. Cunningham, who was released from prison in 2013, received a conditional pardon.

Mr Trump also commuted the prison sentence of former Detroit mayor Kwame Kilpatrick, who has served about seven years behind bars for a racketeering and bribery scheme.

Bannon - who served in the Navy and worked at Goldman Sachs and as a Hollywood producer before turning to politics - led the conservative Breitbart News before being tapped to serve as chief executive officer of Mr Trump's 2016 campaign in its critical final months.

He later served as chief strategist to the president during the turbulent early days of Trump's administration and was at the forefront of many of its most contentious policies, including its travel ban on several majority-Muslim countries.

But Bannon, who clashed with other top advisers, was pushed out after less than a year. And his split with Mr Trump deepened after he was quoted in a 2018 book making critical remarks about some of Mr Trump's adult children.

Bannon apologised and soon stepped down as chairman of Breitbart. He and Trump have recently reconciled.


Sting in the Tail: Assange, Extradition and the Protection of Press Freedom – Byline Times

Long-time campaigner for whistleblowers and hacktivists, Naomi Colvin, argues that the case of Julian Assange reveals the outdated and illiberal mess of British secrecy laws

It is nice to be proven right, even when it comes as a surprise. After WikiLeaks co-founder Julian Assange was arrested in April 2019, I explained in an article for Byline Times that his extradition to the United States was by no means a foregone conclusion. And so it has transpired.

On 4 January, district judge Vanessa Baraitser surprised the world with a ruling that may rescue Assange on exactly the basis I predicted. In fact, the ruling is the latest result of a decades-long campaign against the excesses of the 2003 UK-US Extradition Treaty, which has spanned the cases of Gary McKinnon, Richard ODwyer and Lauri Love.

Putting aside Brexit for one moment, this movement against extradition holds some claim to being the most effective extra-parliamentary campaign in recent British history.

British-Finnish computer scientist Lauri Love won his appeal against extradition to the US on hacking charges in February 2018, partly because his diagnosis of Asperger syndrome was found to heighten the risk that he would take his own life.

The Love case is important because it embedded the post-McKinnon understanding of the injustices of US extradition into law. In fact, the reasoning in USA v Assange follows the logic and language of the Love appeal ruling almost exactly.

One of the grounds on which Love won his appeal was the forum bar. This was the change in the law introduced by the UK's then Home Secretary, Theresa May, after she bowed to public pressure and made the political decision that Gary McKinnon, a Scottish systems administrator who was accused in 2002 of the biggest military computer hack of all time, would not be extradited to the US. The forum bar allows individuals with ties to the UK to have that factor balanced against arguments in favour of them being sent abroad.

Shortly after the Love ruling in 2018, the banker Stuart Scott won his appeal against extradition to the US on these grounds, and other victories have followed. This is not to say that forum arguments always prevail or that abusive proceedings don't happen any more, but the Love precedent has led to some rebalancing of the situation for UK residents.

But Assange's ruling did not rely on the forum bar but on the other way in which Lauri Love won his appeal: that the inadequacy of US prison conditions for those with mental health issues makes extradition an oppressive death sentence.

Julian Assange's ruling gives him a strong basis for seeing off an appeal and will likely save his life, but that is not to say that it makes for comforting reading.

The state of Assange's health should be taken as much as a criticism of the English prison system, such as the use of isolation at HMP Belmarsh, which Assange experienced for six months in late 2019, as it is of the American one.

Many commentators have focused on the lack of comfort for media freedom advocates in the ruling, and it is true that the judge ruled against Assange's defence in all aspects save for the medical evidence. However, the ruling is a pragmatic one that serves a particular purpose: that of stopping Assange's extradition to America.

The defence argument in USA v Assange was complicated. Much of it was either politically controversial (that the extradition was initiated and pursued in an illegitimate way for partisan reasons), above the pay grade of a district judge (the 2003 Extradition Act is incompatible with the UK-US treaty), or totally novel in an extradition setting.

Unlike those kinds of arguments, medical evidence and prison conditions are bread-and-butter stuff for a first instance extradition judge and findings of fact at this level are likely to be respected by an appeal court.

The judge's take on the medical evidence was therefore always going to be the key part of her ruling. In fact, the US presented its own medical experts in September (something it did not do in the Love case), so it is significant that the judge sided unambiguously with the defence. Providing a strong basis for stopping the extradition from happening is almost, by definition, the most important contribution this ruling could have made to press freedom, and it does that.

The defence arguments on freedom of expression fit into the 'novel for extradition' category. Given that the ruling is by a lower court with no precedential value, it is hard to say that it really makes things worse for British journalists. In contrast, what clearly does create difficulties for national security journalists in the US, and via extradition abroad, is the US indictment, which will stay in place regardless of how the extradition case goes. That is why my employers, Blueprint for Free Speech, are currently campaigning for a full pardon.

While the ruling does not provide a precedent, the concerns expressed by organisations such as Reporters Without Borders are not groundless, because it clearly illustrates the problems with the current English legal framework around investigative journalism. Sections of the ruling betray doubts about the legitimacy of technologically assisted reporting and the use of large datasets.

A similar logic, that technology has made everything terribly difficult, was present in the Law Commission's 2017 consultation report on the Protection of Official Data, its first stab at reviewing the Official Secrets Act. This project was initiated in the wake of official discomfort around the Edward Snowden revelations about mass digital surveillance in the US and Britain, but was quickly disowned. A number of press and civil society organisations made their opposition to UK Espionage Act proposals clear at the time.

In a strange coincidence, the final version of that much-delayed report, taking account of the outburst of criticism, was published just as the September hearings in USA v Assange were getting underway. The Law Commission's final verdict is that the Official Secrets Act is no longer compatible with human rights standards and that journalists and whistleblowers should be able to make a public interest case in their defence.

Julian Assange being extradited to face prosecution in the US on Espionage Act charges is by far the worst consequence for press freedom that could come out of his case, and those who care about these issues should keep a close eye on what happens at the High Court on appeal. But not only does the first instance ruling provide real hope that a US appeal will be dismissed, it also gives freedom of expression advocates in Britain a road map for what they should be trying to achieve in 2021.
