How to use if statements in Python – Android Authority

If statements are among the first things you should learn in any programming language, and they are required for pretty much any useful code. In this post, we'll take a look at how to use if statements in Python, so that you can begin building useful apps!

Once you understand this fundamental feature, you'll open up a whole world of possibilities!

If you have never programmed before, then make sure to read the next section to find out precisely what an if statement is, and how to use it.

Also read: How to call a function in Python

If you have coding experience and you just want to know how to use if statements in Python, then read on:

Simply follow the word if with the statement you want to test, and then add a colon. The following code block (all indented text) will run only if the statement is true.
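
A minimal sketch of that structure (the variable name and values here are just placeholders):

```python
# A variable to test
age = 21

# The word if, the statement to test, then a colon
if age >= 18:
    # This indented block runs only if the statement is true
    access = "granted"
    print("Access granted!")
```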

For those with no programming experience, an if statement is a piece of code that is used for flow control. This means that you've created a kind of fork in the road: a point in your program where the flow of events can branch off into two or more paths.

This is essential in any program, as it is what allows a program to interact with the user, or to change dynamically in response to outside factors.

Also read: How to use lists in Python

The if statement in Python does this specifically by testing whether a statement is true, and then executing a code block only if it is.

In other words:

IF this is true, THEN do this.

In a program, this might translate to:

IF the user enters the correct password, THEN grant access.

IF the player has 0 health, THEN end the game.

Now the code can react depending on various factors and inputs, creating an interactive experience for the user!

In order to accomplish this, we must rely on one more advanced concept: the variable. A variable is a word that represents a piece of data. For example, we can say:
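
In Python, assigning a value looks like this:

```python
# Create a variable and assign it a value
magic_number = 7
```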

This creates a variable called magic_number and gives it the value of seven. This is important, because we can now test if that value is correct.

To do this, we write if and then the statement we want to test. This is called the test statement.

When checking the value of something, we use two equals signs. While this might seem confusing at first, it actually avoids ambiguity: a single equals sign assigns a value, while a double equals sign tests for equality.

After the statement, we add a colon, and then an indentation. All code that is indented after this point belongs to the same code block and will only run if the value is true.
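
Putting it together, a complete example might look like this:

```python
magic_number = 7

if magic_number == 7:
    # Indented, so it only runs if the statement is true
    print("The number is correct!")

# Not indented, so it always runs
print("Did you get it right?")
```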

In this example, the words "Did you get it right?" will show whatever the case. But if you change the value of magic_number to 8, then you won't see "The number is correct!" on the screen.

Finally, you may also want to combine if statements with else statements. Else does exactly what it sounds like: it tells Python what to do if the statement isn't true.

For example, we might want to check someone's PIN:
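
One possible sketch (the PIN values here are just placeholders):

```python
correct_pin = 1234
entered_pin = 1234  # imagine this came from user input

if entered_pin == correct_pin:
    print("Access granted!")
else:
    # Runs only when the test statement is NOT true
    print("Access denied!")

# Always runs, whatever the outcome
print("Did you get it right?")
```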

Here, the else code only runs if the PIN is incorrect. "Did you get it right?" still shows no matter what happens!

We can also use a similar variation called elif, which is short for "else if." This means: if that thing isn't true, but this other thing is.

For example:
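
A sketch using two placeholder variables:

```python
player_score = 10
high_score = 12

if player_score > high_score:
    print("New high score!")
elif player_score == high_score:
    # Only checked if the first statement was not true
    print("You tied the high score!")
else:
    print("Better luck next time!")
```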

Notice that this example also compares two different variables with one another!

Now you know the basics of how to use if statements in Python, but there are many more things you can do.

For example, you can use different operators to create different test statements. For instance, the > symbol means greater than, while < means less than.

Thus, we can say: if health is less than one, then the game is over.
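
In code, that might look like this (the starting value is just illustrative):

```python
health = 0

# The < operator tests whether the left side is less than the right
if health < 1:
    print("Game over!")
```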

It's also possible to nest ifs and elses by indenting further and further. This way, you can say: if this is true, then do this, but only if that is ALSO true.
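
For example (the variable names here are just illustrative):

```python
door_locked = True
has_key = True

if door_locked:
    # This nested if is only checked when door_locked is True
    if has_key:
        print("You unlock the door.")
```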

Similarly, we can use the operators and and or in order to combine multiple test statements.

For example:
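
A sketch with and (both statements must be true for the block to run; the names are placeholders):

```python
age = 21
has_ticket = True

if age >= 18 and has_ticket:
    print("Welcome to the show!")
```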

Or:
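
A sketch with or (only one of the statements needs to be true):

```python
day = "Saturday"

if day == "Saturday" or day == "Sunday":
    print("It's the weekend!")
```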

Now that you understand how to use if statements in Python, you have a crucial tool under your belt! This will form the backbone of much of your programming and will help you to run all kinds of logic tests.

So why not take your knowledge further with an online Python course? You can find a list of our favorites to get started with here.

Or, for a more in-depth tutorial that explains everything you need to know to start coding in Python, check out our comprehensive Python guide:


Control the volume with the mouse scroll wheel when the cursor is over the taskbar – Ghacks Technology News

There are three common ways to adjust the volume level on a Windows machine. The most popular option is the volume slider that's available in the system tray.

If you have a keyboard with multimedia keys, you can use the volume up or volume down keys. The third way is to use the volume wheel or keys on your external speakers.

Not everyone has a keyboard or speaker with dedicated volume control options. Besides, if you're using a multi-monitor setup, you may be aware that Windows does not display the system tray on all screens. So sliding the mouse all the way across to the volume slider can quickly become tiring.

TbVolScroll is a portable program that lets you control the volume directly from the Windows taskbar. Run the program's executable and an icon appears in the system tray. Ignore it for now. Instead, mouse over the taskbar and move the scroll wheel up or down; a volume bar pops up at the cursor location, indicating the current audio level as a percentage.

Since this is a taskbar program, it naturally will not work in full-screen mode (e.g., games, video players). The length of TbVolScroll's bar varies with the current volume level. If you have the sound maxed out at 100%, the bar will be long; the length shrinks as you lower the volume. The bar's color also changes as the volume crosses certain thresholds.

The application adjusts the system volume by 5% per scroll. For example, if the sound is at 50% and you scroll up once, it will be set to 55%. If you want finer control, hold the Alt key while adjusting the sound; this makes TbVolScroll shift the volume by 1% instead.

Right-click the TbVolScroll tray icon to access the program's options. The Reset Volume option mutes the audio (it sets the level to 0). The Restart sub-menu has two options: Restart closes and reopens the program, while the second option restarts it with administrator privileges. The application does not require administrator privileges to run, but using that option may help fix issues that prevent it from working. I didn't face a problem using it normally.

The "Set volume scroll step" option lets you edit TbVolScroll's scroll behavior. As mentioned earlier, it is set to 5% by default, but you can set it higher or lower. Customize the bar's visuals with the "Set volume bar appearance" option, which opens a new window where you can configure the bar's width and height. In addition, you may choose a different color for the bar from the color palette. Prefer a transparent volume bar? Drag the slider at the bottom of the window to adjust the bar's opacity. Don't forget to hit the save button after you have edited the settings.

TbVolScroll automatically switches to precise volume control (1% per scroll) when the volume level drops below 10%. If you would rather have precise control all the time, set the "Set precise scroll threshold" option to 100, or pick a custom level; then you don't have to hold the Alt key while adjusting the volume.

Exit the program from the tray menu when you don't need it.

I almost gave up on the program because it wasn't responding. But then I noticed that the project's page mentions that the application does not recognize scrolling when the Windows Task Manager is in focus. I had the window open (in the background), and though it was not in focus, it was causing the issue. TbVolScroll began to work normally when I closed the Task Manager.

TbVolScroll is an open source program. Until Microsoft decides to make the system tray accessible from all monitors, I don't think we are going to find a better on-screen option for controlling the volume.



Ease into autumn with these virtual lecture series and talks – The Architect’s Newspaper

While there's certainly no replacement for the intimacy of an in-person lecture attended by a captivated crowd, there is one distinct upside to having talks, symposiums, and other academic events held virtually due to the COVID-19 pandemic: the potential for a significantly larger audience unrestrained by pesky practicalities like geographic locale.

With most major architecture schools having fully transitioned their event programming to an online format, their fall 2020 lecture line-ups are now more accessible than ever, allowing participants to attend lectures hosted by said schools by simply signing up and opening a new Zoom window at a designated date and time. Unless otherwise noted, all lectures mentioned here are free and open to the public with advance registration required. Most, but not all, are hosted on Zoom Webinar.

Below are a dozen lecture series scheduled for fall 2020, presented by the likes of Harvard GSD, the University of Southern California, the University of Pennsylvania, and more, to get this very different academic season started. While topics vary, the world (and the United States in particular) is a much different place than it was in the fall of 2019, and that's duly reflected in the programming.

AN will continue to add to this list as more lecture series are finalized and announced. Specific dates and times can be confirmed on the events pages of each respective school/program.

The Bernard & Anne Spitzer School of Architecture at the City College of New York

For this year's fall lecture series, the Spitzer School of Architecture is trying something a bit different with the new SCIAME Global Spotlight Lecture Series. Titled Far South, the series, curated by Associate Professor Fabian Llonch, presents talks with leading South American architects who, per the school, will discuss their work and the unique political and environmental challenges they face. Among the featured lecturers are Teresa Moller (Chile), Paulo Tavares (Brazil), Diego Arraigada (Argentina), and Patricia Llosa Bueno (Peru).

Carnegie Mellon University School of Architecture

Per the SoA at Carnegie Mellon, the school's fall 2020 lecture series will focus attention on architecture and activism, and the role that architecture can play in advancing social equity and spatial justice. Scheduled speakers include Mabel O. Wilson (Bulletproofing America's Public Space: Race, Remembrance and Emmett Till), William Gilchrist (Urban Design as a Catalyst for Environmental Equity), and Toni Griffin (Design and the Just City).

Columbia University Graduate School of Architecture, Planning and Preservation

Launching September 21, the fall 2020 public lecture series at Columbia GSAPP is set to include Tatiana Bilbao, Toshiko Mori, Majora Carter, Stephen Burks, Yasmeen Lari, the Black Reconstruction Collective, and Bryan C. Lee Jr. of Colloqate, among others.

Fay Jones School of Architecture and Design at the University of Arkansas

While additional details are forthcoming, the Fay Jones School of Architecture and Design has added virtual lectures from Sara Jensen Carr, Mira Henry, Marion Weiss and Michael Manfredi, Lesley Lokko, Michelle Joan Wilkinson, and Irene Cheng to its event calendar for fall 2020.

Harvard Graduate School of Design

Kicking off on September 10 with a lecture from Linda Shi, assistant professor at Cornell AAP, on the intersection of social justice and urban flood mitigation, Harvard GSD's roster of fall 2020 public programming (all talks and webinars are held via Zoom) also includes conversations with, among others, Emmanuel Pratt, co-founder and executive director of Chicago nonprofit the Sweet Water Foundation; Edgar Pieterse, director of the African Centre for Cities at the University of Cape Town; and landscape architect Everett L. Fly.

Department of Architecture at the Massachusetts Institute of Technology

While additional details are still forthcoming, MIT Architecture's fall 2020 lecture series is slated to include Walter Hood, Derek Ham, Charles Davis II, and Veronica Cedillos.

Rice University School of Architecture

Kicking off on September 2, Rice Architecture's fall 2020 lecture series revolves around a central theme (Race, Social Justice and Allyship) and includes Zoom-based talks from a range of academics, activists, and architects including Ana María León, Jesús Vassallo, and Ilze Wolff and Heinrich Wolff of South Africa-based firm Wolff Architects.

Stuckeman School of Architecture and Landscape Architecture at Penn State University

With on-site events currently on hold, Penn State's Stuckeman School of Architecture and Landscape Architecture has opted to livestream its fall 2020 lecture series. Scheduled speakers include Jenny Sabin, professor of architecture at Cornell AAP and founder of experimental architectural design studio Jenny Sabin Studio; Mark Jarzombek, professor of the history and theory of architecture at MIT; and Zürich-based architect and artist Pia Simmendinger.

University of Southern California School of Architecture

The USC School of Architecture's fall 2020 virtual lecture series recently commenced with a lecture from Sara Zewde of Harlem-based landscape architecture, public art, and urban design practice Studio Zewde. Upcoming lectures will find architect Michael Maltzan, Yale professor and architectural historian Dolores Hayden, Tokyo-based structural engineer Jun Sato, and others taking the Zoom mic.

The University of Texas at Austin School of Architecture

Described as playing an integral role in fulfilling the school's commitment to fostering intellectual curiosity and the open exchange of ideas, the University of Texas at Austin School of Architecture's fall 2020 lecture series will be livestreamed on the school's YouTube channel and touch on society's most pressing issues, including race and spatial justice, ecology and climate change, computation and the proliferation of new and emerging technologies, and more. Upcoming talks include Peter Eisenman in dialogue with Mario Carpo and a lecture from Oakland, California-based designer, urbanist, and spatial justice activist Liz Ogbu.

Weitzman School of Design at the University of Pennsylvania

Beatriz Colomina, Howard Crosby Butler Professor of the History of Architecture at Princeton University, is slated to give the inaugural talk in the Weitzman School of Design's robust fall 2020 lecture series. As evidenced by its title, Architecture and Pandemics: From Tuberculosis to COVID-19, it's a topical one. Other scheduled lectures tackle a wide range of topics outside of the pandemic, including Non-Traditional Green Architecture (Michael Webb, cofounder of Archigram) and The Freedom Colony Repertoire: Promising Approaches to Bridging and Bonding Social Capital Between Urban and Rural Black Meccas from Andrea Roberts, assistant professor of Urban Planning at the College of Architecture at Texas A&M University.

Yale School of Architecture

Kate Wagner, Tod Williams and Billie Tsien, Rebecca Choi, and Walter Hood are among those appearing on the calendar for YSOA's Zoom-based fall 2020 lecture series, which kicks off on October 1. Additionally, the first roundtable in an ongoing, open-to-the-public series of discussions organized by the M.E.D. Working Group For Anti-Racism will commence on September 9 with POLICING.


Investing in Tezos (XTZ) – Everything You Need to Know – Securities.io

What is Tezos (XTZ)?

Tezos (XTZ) is a fourth-generation blockchain network that incorporates advanced protocols to enable a host of functionalities. Primarily, the platform supports the development of decentralized applications (DApps) and the coding of smart contracts.

Tezos is an open-source decentralized network for assets and applications. Today, the Tezos community consists of researchers, developers, validators, and various support groups. All of these parties share the common goal to expand the Tezos ecosystem.

Tezos' history begins in 2014, when co-founders Arthur Breitman and Kathleen Breitman began development on their next-generation blockchain. Specifically, the Breitmans sought to simplify DApp development and create a unique decentralized ecosystem to cater to the needs of the digital economy.

Tezos officially launched in Switzerland in September 2018. Like many other projects in the sector, Tezos utilized a dual-company approach. Specifically, Tezos' founding company is Dynamic Ledger Solutions (DLS).

Arthur Breitman and Kathleen Breitman

Additionally, the group utilizes a foundation for its fundraising purposes. This non-profit is known as the Tezos Foundation. Importantly, the Tezos Foundation is the entity that holds all the operating funds, including those collected during the ICO.

Tezos hit the market running. The firm hosted a record-breaking uncapped ICO in 2017. The event was a major success, securing $232 million in Bitcoin and ether in just under two weeks. The success of the event made international headlines and helped propel Tezos further into the spotlight.

Investors received XTZ for their Bitcoin and Ethereum. XTZ, also called tez or tezzie, is a utility token for the Tezos ecosystem. Users can pay for services and execute smart contracts using XTZ. There are 741,546,948 XTZ in circulation currently.

Tezos never announced the total amount of XTZ the platform plans to release. Developers left this open in a bid to ensure that their platform never reaches its capacity in the market. However, some in the space argue that this lack of scarcity hurts the overall value of the coin.

Tezos' ICO success was short-lived. Within weeks, the president of the Tezos Foundation, Johann Gevers, and the Breitmans got into a public feud over the funds raised. Specifically, Gevers refused to disburse the funds to the Breitmans.

The issue was a huge debacle that caused investors to lose faith in the project. This led to the value of XTZ dropping temporarily. Eventually, Gevers left the project, and the funds made it to their destination. However, Gevers made sure to secure a $400,000 severance package for his troubles.

Tezos is unique in the market for a variety of reasons. For one, it utilizes a Liquid Proof-of-Stake (LPoS) consensus mechanism. Also, the platform introduces an agnostic native middleware known as the Network Shell. This strategy enables developers to utilize modules during the construction of applications.

Tezos is bilingual, meaning it utilizes both an imperative and a functional language. Imperative languages such as Solidity are ideal for smart contract programming in terms of flexibility, whereas functional languages are more adept at mathematical reasoning, making them more secure.

Tezos (XTZ) Twitter

Tezos uses the combination to ensure its smart contracts are both robust and secure. Notably, the Tezos ecosystem relies on OCaml for blockchain programming and Michelson for the coding of smart contracts. This strategy also improves transaction speeds across the network.

Currently, XTZ is capable of around 1000 transactions per second (tps). The limit is based on the max allowed gas per transaction. This rate can also increase in the future via voting on protocol changes such as off-chain scaling solutions.

Tezos offers users some features not available to earlier blockchains. To accomplish this task, Tezos combines its transaction and consensus protocols. This strategy streamlines its processes. Crucially, the combination aids in the communication between the network protocol and the blockchain protocol.

The Liquid PoS consensus mechanism is an upgrade to the Delegated Proof-of-Stake systems found in third-generation blockchains like EOS and NEO. In a DPoS, the community votes on who will function as a delegated node.

Importantly, Delegated nodes approve blocks and add the transactions to the blockchain. Additionally, they have a few more rights and responsibilities in the network. Crucially, the number of delegators allowed depends on the bond size minimum requirement. Currently, this limit allows up to around 70,000 delegators.

The LPoS mechanism is exclusive to Tezos at this time, and it has proven to be very successful. Currently, the network has a stake rate of approximately 80%, spread across 450 validators and 13,000 delegators. This makes Tezos one of the most decentralized blockchains in the sector.

The Liquid PoS offers users more control compared to DPoS systems. For one, every user gets a vote. This strategy helps to ensure a more cohesive community. Keenly, users can vote directly or delegate their voting responsibilities to another party.

Additionally, these delegates can then delegate their votes to other delegates via a process known as transitivity. Notably, users can choose to regain their voting rights at any time. They can even override their representative's vote on any topic where they disagree with their representative's decision.

The Liquid PoS consensus mechanism provides a balanced and inclusive approach to decentralized network security. Each person has a vote that counts in the final approval of network changes. Best of all, anyone can become a delegate for free. You just need to gain the respect of the community.

To participate in the process, a user simply needs to stake their XTZ in a network wallet. In the Tezos ecosystem, this process is called baking. The more XTZ you bake, the better your chances of being selected to bake the next block.

After the block bakes successfully, the network has 32 other randomly selected bakers repeat the process. Once this process is complete, the baker receives a reward. Best of all, the baker gains the ability to charge transaction fees on all the transactions within the block.

The Tezos system mitigates the chance of hard forks via this decentralized voting mechanism. Developers took extra care to ensure that the network has the capabilities to upgrade passively in a decentralized manner via self-amendments. In this way, Tezos seeks to keep its community focused on the same goals.

The voting process begins when a developer submits an upgrade proposal. The proposal must include the technical aspects of the upgrade. Also, it must include the compensation required by the developer for their efforts.

Tezos CoinMarketCap

From here, the protocol goes before the community, which tests it and gives valuable feedback on its merits. Notably, every protocol undergoes multiple testing periods. In this way, Tezos ensures that only top-quality code makes it onto the blockchain.

Following the completion of the testing period, Tezos token holders can vote on the upgrade directly. If approved, the protocol upgrade will integrate into the network via a protocol called a hot swap. Additionally, the developer will receive compensation from the Tezos Foundation for their efforts at this time.

The Tezos ecosystem provides you with the ability to operate under two different account types: Implicit Accounts and Originated Accounts. Critically, these accounts serve different purposes within Tezos' infrastructure. In most cases, an Implicit Account will cover basic functionality.

Implicit Accounts are the type of account that most users possess. These addresses function similarly to traditional crypto accounts. Each Implicit Account includes both public and private keys. Users can check their balance and transfer funds to and from this address.

Originated Accounts are what developers utilize for smart contracts. They differ from Implicit Accounts in a couple of key ways. For one, these account addresses always begin with KT1 rather than tz1. All Originated Accounts include Manager, Amount, Delegatable, and Delegate fields.

Tezos is available on most major exchanges today. Binance, the world's largest exchange, offers multiple Tezos trading pairs. To get started, you just need to register for an account. Once your account verification period is over, you can fund your account with fiat currency.

Once your account has funds, it only takes a second to transfer these funds over to Bitcoin or Ethereum. From here, you will want to exchange for XTZ. The entire process can be done in under ten minutes after your account verification completes.


Storing Tezos is easy. If you are new to the space, you can download a reliable mobile wallet in seconds. The top mobile wallets for this coin are Kukai Tezos Wallet and TezBox Wallet. Both are free to download and provide you with an easy-to-navigate interface.

If you intend to invest significant funds in the project, you should consider a hardware wallet. Manufacturers such as Trezor and Ledger provide high-quality devices for around $100. These devices keep your crypto safely stored offline in cold storage.

Now that Tezos has overcome its internal company issues, the firm is ready to take its platform mainstream. Today, Tezos has one of the largest followings in the market. Consequently, you can expect to see Tezos in the top twenty cryptocurrencies for years to come.


Determined to Salvage the Fall, Cabaret Plots Its (Outdoor, Online) Return – The New York Times

The singer and actress Natalie Douglas welcomes increased attention to race, but said, "We'll have to see going forward how much of it is performative. There are plenty of times where I get an email or release inviting me to an event where there are many performers but not a single person of color, or maybe one."

Cabaret storytelling is often personal, and expanding its viewpoint only makes the art richer, explained Telly Leung, the son of Chinese immigrants. "I always find cabaret is best when people can share their own unique stories, and race is a part of that," he said. "When you have people paying top dollar to see big names in high-end cabaret, you have to ask yourself why there aren't more BIPOC among those big names."

Representing a range of voices is crucial at Don't Tell Mama, which has maintained an outdoor piano bar with a singing bartender since Phase 1 of reopening and added a singing wait staff during Phase 2. "It's our mission to be able to house emerging Latinx musical theater voices," said its general manager Joshua Fazeli, citing the drag queen Lagoona Bloo and the nonbinary singer Castrata as examples. "Their message is urgent: We are not just here, we are queer and Brown, and we bring substantial value to the cabaret canon."

At Pangea, the veteran performance artist Penny Arcade started developing Invitation to the End of the World PT 2: Notes From the Underground in February, and now considers its title prescient. "The situation under Covid is what we've been fearful of since the '60s," she said. "We knew if there wasn't a roping in of corporate greed, of governmental disinterest, we'd have this kind of epic crisis."

Pangea's The Ghost Light Series, aiming to livestream this fall, will also feature the satirical singer Tammy Faye Starlite channeling the Trump spiritual adviser Paula White-Cain, and the queer song cycle Different Stars: A Reckoning with Time, Trauma and Consequence, for which the performer and composer Karl Saint Lucy crafted a narrative frame casting the Black artist James Jackson Jr. as a character who spends his time in quarantine watching Netflix and reflecting on a breakup.

For Douglas, "One benefit of all this is that we're finding new ways to be creative, to get our ya-yas out." While, like others, she greatly misses the presence of a live audience, she was inspired shortly before her Birdland session, when she watched the British drag artist La Voix perform an exhilarating concert online. "My husband said to me, 'Do what La Voix did. The audience is there; they're just on the other side of the camera.'"


How does open source thrive in a cloud world? "Incredible amounts of trust," says a Grafana VC – TechRepublic

Commentary: The shift in the open source industry from infrastructure like Splunk to Elasticsearch comes down to trust, says Gaurav Gupta, a prominent product executive turned investor.

Image: marekuliasz, Getty Images/iStockphoto

Back in 2013 Mike Olson made a bold claim: "No dominant platform-level software infrastructure has emerged in the last ten years in closed-source, proprietary form." Olson is a smart guy, and he was nearly correct, with one small exception to his rule: Splunk. Splunk thrived in spite of its proprietary nature, and leading that success was Gaurav Gupta, then vice president of product at Splunk, and now a partner with Lightspeed Venture Partners. It was a "different time," he said in an interview, both for the industry and for him.

Ever since then he's been building infrastructure the open source way, whether running product at Elastic or later investing in companies like Grafana as a VC. As successful as Splunk was, however, Gupta believes that the "incredible amounts of trust" that open source fosters, coupled with low friction to experimentation, make it the smart investment for today, whether you're a VC or an enterprise trying to innovate your way through a pandemic.

Image: Lightspeed Venture Partners

It's worth dwelling for a moment on Gupta's Splunk experience. Splunk, after all, exploded in adoption at a time when much of the infrastructure world went open source. According to Gupta, Splunk may have slipped into the market just in time. After all, he noted, "Open source didn't exist back then [2004] for the most part." Yes, Linux was around and, yes, things like MySQL and Drupal were taking root, but open source had yet to command the market like it does today.

Splunk was also helped by the fact it catered to a customer (system administrators and similar roles analyzing log data) that was perhaps neither capable of nor interested in digging into source code. What this audience did appreciate, by contrast, was an "incredible end-to-end [product] that really focused on great user experience, and traditionally open source hasn't done a great job on user experience [for] less technical audiences." It didn't hurt that "We were the only one in the market for years," Gupta continued.

SEE: How to build a successful developer career (free PDF) (TechRepublic)

By Gupta's reckoning, despite years of VCs trying to fund "copycat" competitors to Splunk, no one successfully did so...until Elastic managed the feat by accident. "Elastic wasn't designed to be a logging company at all, it was a search company." Having left Splunk for Elastic, Gupta and team saw that users were starting to use the search tool for logging use cases, and hired the developers behind Logstash and Kibana to help build out Elastic's log management capabilities. Unlike open source companies before it, Elastic determined to "not be super generic" and instead "create an integrated stack" to target specific use cases like search and logging.

All of which helps to explain how Splunk emerged as a hugely successful proprietary software company in an area of software (infrastructure) that increasingly skewed open source. It also explains how Gupta jumped from proprietary software to open source. But in a world where cloud delivers and, perhaps, perfects many of the benefits of open source ("ultimately people want to consume open source as a service," he said), what is it about open source that makes it fertile ground for investments, decades after open source stopped being novel?

Cloud gives enterprises a "get-out-of-the-burden-of-maintaining-open-source free" card, but savvy engineering teams still want open source so as to "not lock themselves in and to not create a bunch of technical debt." How does open source help to alleviate lock-in? Engineering teams can build "a very modular system so that they can swap in and out components as technology improves," something that is "very hard to do with the turnkey cloud service."


That's the technical side of open source, but there's more to it than that, Gupta noted. Referring to how Elastic ate away at Splunk's installed base, Gupta said, "The biggest reason...is there is a deep amount of developer love and appreciation and almost like an addiction to the [open source] product." This developer love is deeper than just liking to use a given technology: "You develop [it] by being able to feel it and understand the open source technology and be part of a community."

Is it impossible to achieve this community love with a proprietary product? No, but "It's a lot easier to build if you're open source." He went on, "When you're a black box cloud service and you have an API, that's great. People like Twilio, but do they love it?" With open source projects like Grafana and Elasticsearch, by contrast, developers really love the project, he said, because it's more than a project, more than a technology: "As a developer, you want to be part of that movement."

One key aspect of such developer movements isn't a matter of open source code, though that helps. No, it's really about trust.

A lot of it comes from the fact that things are very transparent in these open source companies, their Github repositories, their issues, their roadmaps. [The] majority of the code may be written by the company, but they do a pretty good job of explaining why every single decision is being made, how it's been made, how it's architected.

It's about trust. When developers have to make a big decision, they're making a bet. Maybe they're embedding Elasticsearch, or they're banking their entire operations team on Grafana. They think, 'This is something [we're] going to be stuck with for a while. I'm actually putting my neck on the line to do this.' And so, good open source companies build incredible amounts of trust.

Such trust is paying dividends for open source companies now, with so many companies struggling to do more with less, and so many developers who are "busy, but they also have time on their hands. They're exploring," suggested Gupta, and open source is the lowest-cost software with the least amount of friction to start experimenting...and falling in love with their software.

Disclosure: I work for AWS, but the views herein are mine and don't necessarily reflect those of my employer.



How does open source thrive in a cloud world? "Incredible amounts of trust," says a Grafana VC - TechRepublic

Who is hiring hundreds of new employees and can Israel lead the open-source code revolution? – CTech

Israeli fintech powerhouse Payoneer recruiting 300 new employees globally. Payoneer has benefitted from the Covid-19 pandemic due to the increased demand for online money transfer and digital payment services.

Private micro-mobility companies might finally give cities the innovation they need. CTech spoke with the CEO of Bird Israel on how private companies can help public sectors, to the benefit of millions.

Never trust hyperlinks, says founder of anti-phishing company Segasec. Elad Schulman, co-founder and former CEO of cybersecurity company Segasec, recently acquired by Nasdaq-listed Mimecast, says visually inspecting a URL no longer cuts it, as attackers become more sophisticated by the day.

Israeli chipmaker Hailo launches a Japanese subsidiary. The launch follows the news of a recent $60 million series B funding round.

Israeli government approves coronavirus czar's traffic light model. According to the approved plan, Israeli towns and regions will be divided into four colored categories, according to the current severity of the outbreak in their territory.

Welltech1 announces $400,000 investment in winner of global wellness startup competition. PopBase is a storybook game that helps kids make healthy life choices; "Our portfolio reflects the diversity in the field," says Welltech1 co-founder Galit Horovitz.

Israel Innovation Authority CEO Aharon Aharon resigns. Aharon, who has led the government's tech investment arm since 2017, said he felt the job had run its course.

Opinion | Can Israel lead the open-source code revolution? The Israeli tech scene is based on partnerships, innovation and independent thinking, all of which are vital to open-source code.


Announcing the General Availability of Bottlerocket, an open source Linux distribution built to run containers – idk.dev

As our customers increasingly adopt containers to run their workloads, we saw a need for a Linux distribution designed from the ground up to run containers with a focus on security, operations, and manageability at scale. Customers needed an operating system that would give them the ability to manage thousands of hosts running containers with automation.

Meet Bottlerocket, a new open source Linux distribution that is built to run containers. Bottlerocket is designed to improve security and operations of your containerized infrastructure. Its built-in security hardening helps simplify security compliance, and its transactional update mechanism enables the use of container orchestrators to automate operating system (OS) updates and decrease operational costs.

Bottlerocket is developed as an open source project on GitHub with a public roadmap. We're looking forward to building a community around Bottlerocket on GitHub and welcome your feature requests, bug reports, or contributions.

We began designing and building Bottlerocket based on the things we've learned from how customers use Amazon Linux to run containers and from running services such as AWS Fargate. At every step of the design process, we optimized Bottlerocket for security, speed, and ease of maintenance.

Bottlerocket improves security by including only the software needed to run containers, which reduces the attack surface. It uses Security-Enhanced Linux (SELinux) in enforcing mode to increase the isolation between containers and the host operating system, in addition to standard Linux kernel technologies that isolate containerized workloads, such as control groups (cgroups), namespaces, and seccomp.

Also, Bottlerocket uses Device-mapper's verity target (dm-verity), a Linux kernel feature that provides integrity checking to help prevent attackers from persisting threats on the OS, such as overwriting core system software. The modern Linux kernel in Bottlerocket includes eBPF, which reduces the need for kernel modules for many low-level system operations. Large parts of Bottlerocket are written in Rust, a modern programming language that helps ensure thread safety and prevent memory-related errors, such as buffer overflows that can lead to security vulnerabilities.

Bottlerocket also enforces an operating model that further improves security by discouraging administrative connections to production servers. It is suited for large distributed environments in which control over any individual host is limited. For debugging, you can run an admin container using Bottlerocket's API (invoked via user data or AWS Systems Manager) and then log in with SSH for advanced debugging and troubleshooting. The admin container is an Amazon Linux 2 container image that runs with elevated privileges and contains utilities for troubleshooting and debugging Bottlerocket. It allows you to install and use standard debugging tools, such as traceroute, strace, and tcpdump. Logging into an individual Bottlerocket instance is intended to be an infrequent operation for advanced debugging and troubleshooting.

Bottlerocket improves operations and manageability at scale by making it easier to manage nodes and automate updates to nodes in your cluster. Unlike general-purpose Linux distributions designed to support applications packaged in a variety of formats, Bottlerocket is purpose-built to run containers. Updates to other general-purpose Linux distributions are applied on a package-by-package basis and the complex dependencies among their packages can result in errors, making the process challenging to automate.

Furthermore, general-purpose operating systems come with the flexibility to configure each instance uniquely for its workload, which makes management with traditional Linux tools more complex. By contrast, updates to Bottlerocket can be applied and rolled back in an atomic manner, which makes them easy to automate, reducing management overhead and operational costs.

Bottlerocket integrates with container orchestrators to enable the automated patching of hosts to improve operational costs, manageability, and uptime. It is designed to work with any orchestrator, and AWS-provided builds work with Amazon EKS (in General Availability), and Amazon ECS (in preview).

We have launched Bottlerocket as an open source project to enable our customers to make customizations to the operating system (e.g., integration with custom orchestrators, kernels, or container runtimes) used to run their infrastructure, submit them for upstream inclusion, and produce custom builds. All design documents, code, build tools, tests, and documentation will be hosted on GitHub. We will use GitHub's bug and feature tracking systems for project management. You can view and contribute to Bottlerocket source code using standard GitHub workflows. The availability of build, release, and test infrastructure makes it easy to produce custom builds that include your changes. ISV partners can quickly validate their software before their customers update to the latest versions of Bottlerocket.

We want to grow a vibrant community of users and contributors who adopt and support Bottlerocket as an open source project. We believe that an open source approach enables us to drive innovation based on our experience with working with other open source projects in the container space such as containerd, Linux kernel, Kubernetes, and Firecracker.

Bottlerocket includes standard open source components, such as the Linux kernel, containerd container runtime, etc. Bottlerocket-specific additions focus on reliable updates and an API-based mechanism to make configuration changes and trigger updates/roll-backs. Bottlerocket code is licensed under either the Apache 2.0 license or the MIT license at your option. Underlying third-party code, like the Linux kernel, remains subject to its original license. If you modify Bottlerocket, you may use Bottlerocket Remix to refer to your builds in accordance with the policy guidelines.

Although you can run Bottlerocket as a standalone OS without an orchestrator for development and test use cases (using utilities in the admin container to administer and update Bottlerocket), we recommend using it with a container orchestrator to take advantage of all its benefits.

An easy way to get started is by using AWS-provided Bottlerocket AMIs with either Amazon EKS or Amazon ECS (in preview). You can find the IDs for these AMIs by querying SSM with the AWS CLI as follows.

To find the latest AMI ID for the Bottlerocket aws-k8s-1.17 variant, run:

aws ssm get-parameter --region us-west-2 --name "/aws/service/bottlerocket/aws-k8s-1.17/x86_64/latest/image_id" --query Parameter.Value --output text

To find the latest AMI ID for the Bottlerocket aws-ecs-1 variant, run:

aws ssm get-parameter --region us-west-2 --name "/aws/service/bottlerocket/aws-ecs-1/x86_64/latest/image_id" --query Parameter.Value --output text

In both of the above example commands, you can change the region if you operate in another region, or change the architecture from x86_64 to arm64 if you use Graviton-powered instances.
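Since the SSM parameter path is assembled from a fixed prefix, the variant, and the architecture, a small sketch like the following (using the variant names from the commands above; the region and arm64 values are just example substitutions) shows how a lookup for Graviton instances in another region would be built:

```shell
# Assemble the SSM parameter path for a given Bottlerocket variant and architecture.
variant="aws-k8s-1.17"   # or "aws-ecs-1" for the ECS variant
arch="arm64"             # use "x86_64" for Intel/AMD instances
param="/aws/service/bottlerocket/${variant}/${arch}/latest/image_id"
echo "$param"

# The actual lookup requires AWS credentials, e.g. in eu-west-1:
# aws ssm get-parameter --region eu-west-1 --name "$param" \
#   --query Parameter.Value --output text
```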

Once you have this AMI ID, you can launch an EC2 instance and connect it to your existing EKS or ECS cluster. To connect to an EKS cluster with the Kubernetes variant of Bottlerocket, you'll need to provide user data, such as the following, when you launch the EC2 instance:

[settings.kubernetes]
api-server = "Your EKS API server endpoint here"
cluster-certificate = "Your base64-encoded cluster certificate here"
cluster-name = "Your cluster name here"
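One way to supply these settings is to write them to a file and pass that file as user data at launch. A minimal sketch, assuming placeholder values throughout (the run-instances line shows only the user-data part of a full launch command):

```shell
# Write the Kubernetes-variant settings to a TOML file (all values are placeholders).
cat > userdata.toml <<'EOF'
[settings.kubernetes]
api-server = "Your EKS API server endpoint here"
cluster-certificate = "Your base64-encoded cluster certificate here"
cluster-name = "Your cluster name here"
EOF

cat userdata.toml

# Then pass the file at launch (AMI ID, instance type, subnet, etc. omitted):
# aws ec2 run-instances --image-id "$AMI_ID" --user-data file://userdata.toml ...
```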

To connect to an ECS cluster with the ECS variant of Bottlerocket, you can provide user data like this:

[settings.ecs]
cluster = "Your cluster name here"

For further instructions on getting started, see the guide for EKS and the guide for ECS.

In addition to using AWS-provided Bottlerocket AMIs, you can produce custom builds of Bottlerocket with your own changes. To do so, you can fork the GitHub repository, make your changes, and follow our building guide. As a prerequisite step, you must first set up your build environment. The build system is based on the Rust language. We recommend you install the latest stable Rust using rustup. To organize build tasks, we use cargo-make and cargo-deny during the build process. To get these, run:

cargo install cargo-make
cargo install cargo-deny --version 0.6.2

Bottlerocket uses Docker to orchestrate package and image builds. We recommend Docker 19.03 or later. You'll need to have Docker installed and running with your user account able to access the Docker API. This is commonly enabled by adding your user account to the docker group.

Once your source code changes are made, build an image by running:

cargo make

All packages will be built in turn, and then compiled into an img file in the build/ directory.

Next, to register the Bottlerocket AMI for use on Amazon EC2, you need to set up the aws-cli and run:

cargo make ami

We invite you to join us in further enhancing Bottlerocket. See the Bottlerocket issues list and the Bottlerocket roadmap. We welcome contributions. Going over existing issues is a great way to get started contributing. See our contributors guide for details.

We hope you use Bottlerocket to run your containers and we look forward to your feedback!


Closing the (back) door on supply chain attacks – SDTimes.com

Security has become ever more important in the development process, as vulnerabilities last year caused the 2nd, 3rd, and 7th biggest breaches of all time, as measured by the number of people affected.

This has exposed the industry's need for more effective use of security tooling within software development as well as the need to employ effective security practices sooner.

Another factor contributing to this growing need is the prominence of new attacks such as next-generation software supply-chain attacks that involve the intentional targeting and compromising of upstream open-source projects so that attackers can then exploit vulnerabilities when they inevitably flow downstream.


The past year saw a 430% increase in next-generation cyber attacks aimed at actively infiltrating open-source software supply chains, according to the 2020 State of the Software Supply Chain report.

"Attackers are always looking for the path of least resistance. So I think they found a weakness and an amplifying effect in going after open-source projects and open-source developers," said Brian Fox, the chief technology officer at Sonatype. "If you can somehow find your way into compromising or tricking people into using a hacked version of a very popular project, you've just amplified your base right off the bat. It's not yet well understood, especially in the security domain, that this is the new challenge."

These next-gen attacks are possible for three main reasons. One is that open-source projects rely on contributions from thousands of volunteer developers, making it difficult to discriminate between community members with good or bad intentions. Secondly, the projects incorporate up to thousands of dependencies that may contain known vulnerabilities. Lastly, the ethos of open source is built on shared trust, which can create a fertile environment for preying on other users, according to the report.

However, proper tooling, such as the use of software composition analysis (SCA) solutions, can ameliorate some of these issues. SCA is the process of automating the visibility into open-source software (OSS) for the purpose of risk management, security and license compliance.

DevOps and Linux-based containers, among other factors, have resulted in a significant increase in the use of OSS by developers, according to Dale Gardner, a senior director and analyst on Gartner's Digital Workplace Security team. Over 90% of respondents to a July 2019 Gartner survey indicate that they use open-source software.

"Originally, a lot of these [security] tools were focused more on the legal side of open source and less on vulnerabilities, but now security is getting more attention," Gardner said.

The use of automated SCA

In fact, the State of the Software Supply Chain report found that high-performing development teams are 59% more likely to use automated SCA and are almost five times more likely to successfully update dependencies and to fix vulnerabilities without breakage. The teams are more than 26 times faster at detecting and remediating open-source vulnerabilities, and deploy changes to code 15 times more frequently than their peers.

"The high-performer cluster shows high productivity and superior risk management outcomes can be achieved simultaneously, dispelling the notion that effective risk management practices come at the expense of developer productivity," the report continued.

The main differentiator between the top and bottom performers was that the high performers had a governance structure that relied much more heavily on automated tooling. The top teams were 96% more likely to be able to centrally scan all deployed artifacts for security and license compliance.

"Ideally, a tool should also report on whether compromised or vulnerable sections of code once incorporated into an application are executed or exploitable in practice," Gardner wrote in his report titled "Technology Insight for Software Composition Analysis." He added, "This would require coordination with a static application security testing (SAST) or an interactive application security testing (IAST) tool able to provide visibility into control and data flow within the application."

Gardner added that the most common approach now is to integrate a lot of these security tools into IDEs and CLIs.

"If you're asking developers, 'I need you to go look at this tool that understands software composition' or whatever the case may be, that tends not to happen," Gardner said. "Integrating into the IDE eliminates some of the friction with other security tools, and it also comes down to economics. If I can spot the problem right at the time the developer introduces something into the code, then it will be a lot cheaper and faster to fix it than if it were down the line. That's just the way a lot of developers work."

Beyond compliance

Looking at licenses and understanding vulnerabilities with particular packages are already prominent use cases of SCA solutions, but that's not all they're capable of, according to Gardner.

"The areas I expect to grow will have to do with understanding the provenance of a particular package: where did it come from, who's involved with building it, and how often it's maintained. That's the part I see growing most, and even that is still relatively nascent," Gardner said.

The comprehensive view that certain SCA solutions provide is not available in many tools that only rely on scanning public repos.

Relying on public repos to find vulnerabilities, as many security tools still do, is no longer enough, according to Sonatype's Fox. Sometimes issues are not filed in the National Vulnerability Database (NVD), and even where these things get reported, there's often a two-week or more delay before the information becomes public.

"So you end up with these cases where vulnerabilities are widely known because someone blogged about it, and yet if you go to the NVD, it's not published yet, so there's this massive lag," Fox said.

Instead, effective security requires going a step further: inspecting the built application itself to fingerprint what's actually inside it. This can be done through advanced binary fingerprinting, according to Fox.

The technology tries to deterministically work backwards from the final product to figure out whats actually inside it.

"It's as if I hand you a recipe, and if you look at it, you could judge a pie or a cake as being safe to eat because the recipe does not say 'insert poison,' right? That's what those tools are doing. They're saying, well, it says here sugar, it doesn't say tainted sugar, and there's no poison in it. So your cake is safe to eat," Fox said. "Versus what we're doing here is we're actually inspecting the contents of the baked cake and going, wait a minute. There's chromatography that shows that there's actually poison in here, even though the recipe didn't call for it, and that's kind of the fundamental difference."

There has also been a major shift from how application security has traditionally been positioned.

Targeting developmentIn many attacks that are happening now, the developers and the development infrastructure is the target. And while organizations are so focused on trying to make sure that the final product itself is safe before it goes to customers and to the server, in the new world, this is irrelevant, according to Fox. The developers might have been the ones that were compromised this whole time, while things were being siphoned out of the development infrastructure.

"We've seen attacks that were stealing SSH keys, certificates, or AWS credentials and turning build farms into cryptominers, all of which has nothing to do with the final product," Fox said. "In the DevOps world, people talk a lot about Deming and how he helped Japan make better, more efficient cars for less money by focusing on key principles around supply chains. Well, guess what. Deming wasn't trying to protect against a sabotage attack of the factory itself. Those processes are designed to make better cars, not to make the factory more secure. And that's kind of the situation we find ourselves in with these upstream attacks."

Now, effective security tooling can capture and automate the requirements to help developers make decisions up front, and can provide them information and context as they're picking a dependency, not after, Fox added.

Also, when the tooling recognizes that a component has a newly disclosed vulnerability, it can recognize that it's not necessarily appropriate to stop the whole team and break all the builds, because not everyone is tasked with fixing every single vulnerability. Instead, it's going to notify one or two senior developers about the issue.

"It's a combination of trying to understand what it takes to help the developers do this stuff faster, but also be able to do it with the enterprise top-down view and capturing that policy, not to be Big Brother-y but to capture the policy so that when you're the developer, you get that instant information about what's going on," Fox said.


Vint Cerf: Why everyone has a role in internet safety – ComputerWeekly.com

When Computer Weekly spoke to Vint Cerf, father of the internet, in 2013 at the 40th anniversary of TCP/IP, the protocol he co-wrote with Robert Kahn, he spoke about the challenges facing users arising from the globalisation of the internet.

Today is the age of sharing and, as Cerf points out, sharing tools are now very common. But his concern is that social media amplifies everything, both good and bad. He says: "Now we have to tame cyber space."

"The internet has become a global collaboration platform, and it was designed that way," says Cerf. "The whole story is all about sharing: look at Tim Berners-Lee and the worldwide web."

Cerf says the origins of the internet lie in Arpanet in 1969, motivated by a desire by the US Defense Advanced Research Projects Agency to stimulate collaboration between artificial intelligence and computer science researchers across universities. Sharing information broadly and collaboration motivated the development of the internet and, by the 1980s, Cerf recalls that 3,000 universities were connected. "The US Department of Energy and Nasa wanted connectivity and sponsored the research," he adds.

But although it has been rooted in collaboration, the founding principles of the internet are now under threat. There are ongoing trade disputes between countries, such as the spat between China and the US, which, if taken to an extreme, could result in one state closing off internet access. Cerf says: "People are surprised that the internet can be turned off, but if you shut down the underlying transport mechanisms, the net simply does not work."

The internet may have been born as a platform for global collaboration, but Cerf is worried that it risks being fragmented. Some states, such as Russia and China, are monitoring their internet borders with country firewalls; others, including India, have thrown a switch to turn off the internet, which happened at the end of last year in the Kashmir region, when the state intervened in a bid to curb public unrest.

In 2019, Cerf spoke about the pacification of cyber space when he gave a talk at Oxford University. He argues that fraud, malware and misinformation are now far too commonplace on the internet. "Immeasurable harm is happening," he warns. "Many people don't feel very safe right now. People may not want to use the net at all for fear of harm, and the net will simply collapse."

Like the major pieces of infrastructure that evolved during the 19th and 20th centuries, Cerf believes that a legislative framework is now needed. He says: "When roads were improved to carry cars, there were very few rules, but eventually it became apparent that people need rules."

He says this tends to happen when policy-makers start to appreciate that people's behaviour requires management, which leads to legislation. "At some point, there will have to be consequences for bad behaviour on the net," he says.

But to succeed, Cerf argues that such legislation will require cooperation across international boundaries, in order to track down people who are exhibiting harmful behaviour, and this is not going to be easy.

"It will lead to extremes," he says. "If you look at the Chinese mechanisms for limiting bad behaviours, they are way off in a direction most US and UK citizens would not want to go. Total anarchy is not very attractive, either. There must be some place in between where behaviour is adequately regulated, so we can feel we are safe."

Today, with the internet of things (IoT), Cerf says: "You have many billions of devices interacting with other devices. We are doing billions of experiments with pieces of software that have never seen each other before."

For Cerf, the only reason these things actually work is thanks to internet standards, which are another form of collaboration. "Standards really help," he says. "They allow interoperability, even if you haven't tested a particular combination."

The architecture of the internet is open, he says, which means that if people don't like how it works, it can be changed. The protocols are also open, so people can see how they work.

Open encompasses open protocols, open data and open source, and when asked about the significance of open source, Cerf admits he has mixed views. "Open source implementations are open," he says. "I like that you can see code, and ingest the code. But I worry that people grab open source code and think there are no bugs. Your eyes should be wide open when you use open source. We find bugs that are 20 to 25 years old. People assume they have all been erased already."

Such bugs lead to security flaws such as Heartbleed, the 2014 bug in the OpenSSL library that wreaked havoc across the internet.

Looking at how to make the internet safe, Cerf says: "Transparency is our friend; it creates common sense. Safety is a shared responsibility. People have to recognise they are part of the solution to the problem."

For instance, he says, no one should ever click on an attachment that claims to have come from a friend. Instead, they should email the attachment to the friend directly, asking whether it is legitimate.

For Cerf, the HTTPS protocol is a very important mechanism for securing communications. He is also a fan of two-factor authentication for securing online banking and is happy to use an authentication device, even if it is not convenient, because it adds a layer of security against fraudsters. But he adds: "I have 300 online accounts, and so I need the equivalent of one two-factor authentication device to handle all accounts."

Cerf doesn't trust the use of mobile phones as the second factor of authentication. He says: "Mobiles are hackable. The SIM chip can be conned. I have seen server hijacking [attacks] use that technique."

Security of the internet and web is built on layers, but, as Cerf points out, achieving this is hard because it requires third-party trust. He says that third-party trust is a really tough problem to solve, because there are many certificate authorities, some of which have been compromised.

Cerf is also extremely concerned about IoT security. "They are cheap devices and the manufacturers don't spend a lot of time on security," he says. To improve IoT security, Cerf says he would like to see public/private key authentication implemented in IoT connectivity.

Today, internet connectivity involves transmitting photons in optical cables at the speed of light between one point on the planet and another. Looking towards the future of internet technology, one of the most compelling areas of research to emerge is the use of quantum mechanics in data communications.

"The classic use is in quantum key distribution," says Cerf. "The hottest topic is the quantum relay. The idea is to build a network that allows you to transmit photons that are entangled, so that two different quantum machines that are separate from each other can become entangled, so that the computation can happen concurrently."

Cerf says the benefit of a quantum relay is that it gets around the difficulties of reliably building bigger quantum machines, which use more qubits. A quantum relay effectively enables quantum computers to scale horizontally, as Cerf explains: "If you can build one quantum machine with enough qubits to do something, what would happen if you then have replicas and pass the quantum state to the other machines, so that you can run them in parallel?"

"This is the goal of a quantum relay," he says.
