Protests against the extradition of Julian Assange in Bonn, Germany – DiEM25

DiEM25 continues to show that democracy is one of its core beliefs by not only covering Julian Assange's extradition trial and pointing out the case's implications for our democracies, but also by taking a stand in the streets against this latest attack on the free press by the Trump administration.

As former Brazilian President Lula notes, Assange's only crime was exposing war crimes committed by the US military in Iraq and Afghanistan:

No one who believes in democracy can allow someone who provided such an important contribution to the cause of liberty to be punished for doing so. Assange, I repeat, is a champion of democracy and should be released immediately.

John Shipton, Assange's father, spoke about the terrible conditions his son has had to endure during his detention in a recent interview published by the Progressive International, a joint initiative between DiEM25 and the Sanders Institute to form a coalition of progressive forces around the world.

The protest took place in front of the city's famous Beethoven monument at noon, where multiple people approached our members to find out about this attack on the freedom of the press, what the response from our governments has been, and what they could do to help fight this injustice.

One passerby who expressed her support for the release of Julian Assange said she had contacted local groups about the extradition trial but was disappointed to find out they were not taking any action. After a short discussion with our members, she decided to join us in solidarity (shown in the picture wearing a hat).

One of the organizers of the event, Yunus Arikan, said the following:

Brave whistleblowers and investigative journalists are essential building blocks of our democratic societies. Julian Assange in the world and Müyesser Yıldız along with OdaTV in Turkey are the best of their classes. And yet, they are both threatened with death, which should be slammed and resisted by all progressives. As a member of the transnational movement DiEM25, I am proud to stand with my comrades in Bonn to call for the freedom of Julian Assange, Müyesser Yıldız and, under their name, of all brave whistleblowers and investigative journalists around the world.

Although Julian Assange's trial is a high-profile one with some coverage in the mainstream media, other journalists suffering the same fate barely get any attention. Such is the case of Müyesser Yıldız, a Turkish journalist who is currently imprisoned for her investigative work.

If you, like us, believe journalists should be protected to do their jobs without state censorship, we invite you to stand in solidarity with us by talking about these injustices with your family and friends and by signing this petition. Thank you.

#FreeAssange

In solidarity,

DSC Bonn


Why Julian Assange, a Non-US Citizen, Operating Outside the US, Is Being Prosecuted Under the Espionage Act – Consortium News

Many people ask how Julian Assange, an Australian who's never operated in the U.S., can be prosecuted under the U.S. Espionage Act. Here is the answer.

Territorial Reach: The 1961 Amendment That Imperils Assange

By Joe Lauria, Special to Consortium News

If the original 1917 Espionage Act were still in force, the U.S. government could not have charged WikiLeaks publisher Julian Assange under it. The 1917 language of the Act restricted the territory where it could be applied to the United States, its possessions and international waters:

The provisions of this title shall extend to all Territories, possessions, and places subject to the jurisdiction of the United States whether or not contiguous thereto, and offenses under this title when committed upon the high seas or elsewhere within the admiralty and maritime jurisdiction of the United States


WikiLeaks' publishing operations have never occurred in any of these places. But in 1961 Congressman Richard Poff, after several tries, was able to get the Senate to repeal Section 791, which restricted the Act to "within the jurisdiction of the United States, on the high seas, and within the United States."

Poff was motivated by the case of Irvin Chambers Scarbeck, a State Department official who was convicted under a different statute, the controversial 1950 Subversive Activities Control Act, or McCarran Act, of passing classified information to the Polish government during the Cold War.

(Congress overrode a veto by President Harry Truman of the McCarran Act. He called the Act the greatest danger to freedom of speech, press, and assembly since the Alien and Sedition Laws of 1798, a mockery of the Bill of Rights and a long step toward totalitarianism. Most of its provisions have been repealed.)


Polish security agents had burst into a bedroom in 1959 to photograph Scarbeck in bed with a woman who was not his wife. Showing him the photos, the Polish agents blackmailed Scarbeck: turn over classified documents from the U.S. embassy or the photos would be published and his life ruined. Adultery was seen differently in that era.

Scarbeck then removed the documents from the embassy, which is U.S. territory covered by the Espionage Act, and turned them over to the agents on Polish territory, which at the time was not.

Scarbeck was found out, fired, and convicted, but he could not be prosecuted under the Espionage Act because of its then territorial limitations. That set Congressman Poff off on a one-man campaign to extend the reach of the Espionage Act to the entire globe. After three votes the amendment was passed.

The Espionage Act thus became global, ensnaring anyone anywhere in the world in the web of U.S. jurisdiction. With the precedent set by the Assange prosecution, no journalist anywhere in the world who publishes national defense information is safe from an Espionage Act prosecution.

Joe Lauria is editor-in-chief of Consortium News and a former UN correspondent for The Wall Street Journal, Boston Globe, and numerous other newspapers. He was an investigative reporter for the Sunday Times of London and began his professional career as a stringer for The New York Times. He can be reached at joelauria@consortiumnews.com and followed on Twitter @unjoe.



Choosing the Best Programming Language for Your Native App – Dice Insights

So you have a great idea for a native app, but you're not sure how to build it or where to begin on the development side.

One of the first questions you'll need to ask yourself is which programming language is the best fit for what you're trying to build, and what's the best path?

Let's start with the basics. Native apps are those built for a specific OS. For example, take the difference between a mobile webpage you bring up in a browser and an app such as Instagram that you download to your device. Unlike a web app, a native app gives you the ability to send push notifications and quickly share data from one app to another.

These platform-specific apps interact seamlessly with all other facets of a smartphone or other mobile device, allowing the app to instantly interact with the users camera, microphone, or geolocation. The lattermost example benefits the app-maker, allowing them to customize their offerings and rewards based on location while the user can take advantage of nearby deals or storefronts.

Do you value app speed on the front end? Something easy to manipulate on the back end? Both? As you familiarize yourself with platforms and surface-level programming, the decisions of how, when, and why to use a specific language become clearer.

Knowing which platform to build your native app on depends on knowing whether your user base tends to congregate on iOS, Android, or both. With a great app idea in mind and an understanding of your target market, you can more confidently shop for languages on a given platform. Let's take a look at some of the most popular ones below:

Objective-C, long considered Apples default language, has been going strong since the 1980s. By virtue of being the standard-bearer in iOS for so long, this all-purpose programming language has an extensive library and is known by almost any Apple developer.

Another major benefit of using Objective-C is its stability. Once you develop your app in the language, you won't need to spend lots of time on updates and new versions. Unfortunately, Apple seems to be shifting away from Objective-C. Its performance is somewhat limited, and it lacks the modern features of newer competitors.

If Objective-C represents the present, Swift is certainly the future of iOS. Apple is clearly trying to make Swift its go-to coding language. As more emphasis gets put on Swift, it should be at the top of any conversation when choosing an iOS language. Simply put, Swift is the new and much faster version of Objective-C.

In addition to a faster development process, other pros of using Swift are its easy scalability and a safety system that helps prevent crashes. On the flip side, Swift is still a relatively young language, so its library and resources are limited when compared to Objective-C. Another consideration is that there are fewer Swift developers out there than for its predecessor; however, that's expected to change in the coming years.

Python, an ever-popular language, is especially useful for mobile apps that leverage large amounts of data and/or machine learning. Python is able to easily crunch big packages of data and interpret them for developers. Netflix, Reddit, and Facebook are among the big-name users of Python for these exact reasons.

Although Python was originally meant to be a scripting language, it is one of the most popular languages among native app developers because of its ability to handle enormous datasets. It's also preferred for its extensive third-party library options, which give it an advantage over Swift when working on back-end apps. Another benefit of Python is that it's easy to understand, so you have a wide base of developers who can utilize it, and it can be integrated with other popular languages such as Java.
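As an illustrative sketch (not from the article), a few lines of standard-library Python are enough to aggregate a dataset by category; the column names and viewing-log rows below are hypothetical:

```python
import csv
import io
from collections import Counter

def top_categories(csv_text, column, n=3):
    """Count rows per category in a CSV and return the n most common."""
    reader = csv.DictReader(io.StringIO(csv_text))
    counts = Counter(row[column] for row in reader)
    return counts.most_common(n)

# Hypothetical viewing-log data, in the spirit of the use cases named above.
data = """user,genre
a,drama
b,comedy
c,drama
d,sci-fi
e,drama
"""
print(top_categories(data, "genre"))  # → [('drama', 3), ('comedy', 1), ('sci-fi', 1)]
```

At real scale the same pattern streams rows from a file instead of a string, which is part of why Python handles large datasets comfortably.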

In 2012, RubyMotion was released and challenged Objective-C's stranglehold over iOS mobile app development. By allowing programmers to use Ruby's much-beloved language to create native apps, RubyMotion still provides an interesting alternative to the more popular options listed above.

Technically, RubyMotion can be used as a cross-platform language, but it is routinely used for iOS development. It's known for running very fast and giving developers a variety of testing tools. Since RubyMotion is a cross-platform language, one major downside is that once you write your code in Ruby, you will still need to learn the host API, which will be written in Objective-C or Swift.

Java, the official language of Android, is also its most popular. Keep in mind that Java is flexible and can be an option if you're ever interested in developing cross-platform apps. For native app developers, it also has plenty of perks. As Android's default language, it has a wide variety of libraries and a good selection of open-source material to work with. It tends to allow for a faster user experience than other Android languages.

Some of the drawbacks of Java include the fact that it's a complicated system to learn and not advised for novice coders. In addition, doing simple tasks can feel arduous, as an excessive amount of code is needed for relatively minor commands. And the more code that's written, the more opportunities there are for errors.

One of the main alternatives to Java is Kotlin, an open-source language created in 2011. Kotlin can be an attractive choice over Java because the code-writing process is easier, resulting in shorter, more compact code (making it less likely to produce errors). Kotlin is also flexible and interoperates easily with Java, as it has access to the same libraries.

A downside of Kotlin is that it tends to be slower than Java overall. Additionally, since it's one of the newer Android languages, there is limited help from developers and programmers compared to Java.

Though these languages are considered more complicated than others on this list, C/C++ provides a lot of flexibility. Whether you're building a low-level program or something more sophisticated like a graphical user interface (GUI), these languages can do the job. As compiled languages, which we'll talk more about shortly, they are an extremely fast option for native apps. And thanks to their popularity among developers, there is a huge community readily available, as well as countless resources via libraries and compilers.

C/C++ should be avoided if working with beginner programmers because of its sheer complexity. For those who have mastered it, there are tons of positives. It also follows a similar syntax to Java, providing some leeway in the learning curve.

Though Android does not support Lua by itself, the language can be brought to the OS using an Android Software Development Kit (SDK). It is most commonly used for gaming apps and is recognized as a very fast, high-level language that is relatively easy to use.

Another major upside is that it does not take up much memory and can easily be transferred to the C/C++ languages, which is part of what makes it so useful for Android. Since Lua is not super common, it has limited resources and could require more time for developers to script their own code or fix problems.

Microsoft's programming language C# is ideal for Windows apps, but its code can be cross-compiled and run on iOS and Android for native apps. This is thanks to Xamarin. Apart from needing just one base of code even when used across platforms, another benefit of C# is that there aren't any lags or issues with speed. For Android, C# is also often simpler to use than Java because of its straightforward syntax.

However, there is a limited pool of resources and knowledgeable developers who work with Xamarin. Additionally, apps built with Xamarin are normally twice the size of your average native app.

Though HTML is normally reserved for web-based applications, its code can be transferred to native apps through third-party software (most notably Apache Cordova). This gives you the features and feel of web browsing in an app.

The fifth revision of HTML, HTML5, is easy to use, making it a good programming language for beginners. On paper, you should also save on costs, since you're not required to pay royalties and it can be used across devices. However, if you transition HTML coding to both iOS and Android native apps, you'll likely need to pay two different programming teams. Also, be mindful of the overreliance on third parties to make sure your native app is working as it should. When errors occur in the app, it takes valuable bandwidth and time to get them corrected.

It's been five years since Facebook released React Native, which immediately stirred up attention as a new and promising addition to cross-platform coding languages. The framework allows native apps to be built for both iOS and Android, and is lauded for its short development time. One obvious advantage is that you only need to write code once and then you can use it for both of the major operating systems. It's also usually cheaper to go this route. Since React Native is based on JavaScript, you'll likely only need a JavaScript developer to help implement the language.

Since React Native has an excited young community of developers backing it, it's soon to have even more tools at its disposal. Naysayers have complained about apparent bugs that hamper navigation, and because it is a cross-platform framework, any custom modules your developers build can end up scattered across a variety of codebases.

Google's UI framework Flutter is built on a multi-platform language known as Dart. As with other cross-platform languages, Dart's appeal comes from the ability to use one codebase that works on iOS, Android, and the web. Dart also comes equipped with an expansive core library, and has a number of useful tools, such as its dev_compiler, that can speed up the development process.

Dart also receives high marks for being easy to learn, but it still has a small community and has yet to become as competitive as some of its cross-platform rivals.

With all of these choices, it can be daunting to arrive at the right decision when picking a language for your native app. Here are some things to consider to help guide you to the right choice.

Coding languages can vary greatly depending on syntax, typing, and level. When looking at the differences in implementation, there are two distinctions to be made: compiled and interpreted languages.

Compiled languages follow a static process in which the program is translated ahead of time into machine code for its target machine. Interpreted languages require a separate program to read and carry out the code. The approach here is more line-by-line execution, as opposed to compiled languages, whose code is fully translated beforehand.

So what does it all mean? Well, it depends on how you look at it and how much experience you have with both approaches. While compiled languages have the luxury of running faster with fewer problems, they're not always advantageous. Interpreted languages allow developers more freedom, since going line-by-line means code can be modified while running, and they offer dynamic typing that compiled languages don't.
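A short illustration of that freedom, using Python as the interpreted example (the function name here is made up): because the interpreter executes code as it goes, a program can define new code at runtime, and a variable's type can change.

```python
# Build and execute new code while the program is running.
source = "def greet(name): return f'Hello, {name}!'"
namespace = {}
exec(source, namespace)      # the interpreter compiles and runs this on the fly
greet = namespace["greet"]
print(greet("world"))        # → Hello, world!

# Dynamic typing: the same variable may hold values of different types.
x = 42
x = "now a string"
print(type(x).__name__)      # → str
```

A compiled language would reject both of these patterns at build time, which is exactly the speed-for-flexibility trade described above.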

Some common examples of compiled languages that we've mentioned are C#, Java, Kotlin, Objective-C, and Swift.

For interpreted languages, the most popular include Python, Ruby, PHP, and JavaScript.

One of the most important factors when considering which programming language to use for your native app is the number of resources available for each language. When it comes to resources, you're likely looking for two things: First, you want a language with an established community of programmers you can tap into for help. Second, it is important to find a language with an expansive library of open-source solutions so the programmers you hire don't have to reinvent the wheel when working on every update or error.

Languages with a large community of programmers include Python, Java, and C#. Python is revered by both beginners and more experienced programmers because it is simple yet has a large range of applications. Since Python powers such a wide variety of products, from popular mobile apps like Instagram to artificial intelligence, there is a large supply of in-demand programmers.

Some notable languages with a voluminous library of solutions include Java and PHP. Java has around 400,000 different libraries and has received high marks for its wide selection of resources for programmers that work most commonly on Android. However, it can be used across other platforms, as well.

Though these languages work for many people, there may be something special you're looking for when building your native app. With hundreds of programming languages on the market, you're certainly not lacking options.

Once you've narrowed down your target platform for your audience and have a feel for how you want your app to perform, the options become clearer. That puts you one step closer to finding the perfect language for your new native app.

Camilo Usuga is the CTO and Head of Product at Talos Digital.



5 Reasons Python is Still the King of Programming Languages – Dice Insights

Just about every programming language has an ardent fanbase, and Python is no different. Long an extremely popular generalist language, Python has been winning new fans in ultra-specialist segments such as data science and machine learning. No wonder it regularly ranks so highly on various most-popular-language lists, including the TIOBE Index, RedMonk, and Stack Overflow's annual Developer Survey.

If you're new to programming and wondering whether to prioritize the time to learn Python, here's a brief run-through of what developers and other technologists love about the language, along with some advice about adopting it.

"Python is the perfect first programming language for beginners," Sebastian Lutter, CTO at Pixolution, told Dice. "It provides a clear and readable syntax that makes it easy to learn the fundamentals of programming and allows you to focus on creating solutions for your problems quickly."

Michal Kowalkowski, CEO of NoSpoilers.ai, agreed: "Python is easy to learn, even for complete programming beginners. The syntax is simple, and you can master it in a couple of days. Beginners might feel scared when moving from Python to low-level languages like C++, whereas other programmers who start learning Python immediately see its simplicity."

For beginners, picking up any new programming language can be intimidating at first. But like any popular language, Python has a lot of documentation to help you on your way. For example, Python.org offers a handy beginners' guide to programming and Python. If you're a visual learner, Microsoft has a video series, Python for Beginners, with dozens of lessons (most under five minutes in length; none longer than 13 minutes). Once you've mastered some of the basics, a variety of tutorials and books (some of which will cost a monthly fee) can help you adopt the language in the context of data analytics and other fields.

Dave Wade-Stein, senior instructor at DevelopIntelligence, added: "Python is pithy. One does not have to write a lot of code to get things done. And as a result, programmers can be more productive in Python compared to languages that require a lot of boilerplate code to perform common tasks. In addition, in the DevOps world, where Python is immensely popular, engineers can automate tasks with fewer lines of code, allowing them to focus on further reducing technical debt."
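As a small, hypothetical example of that pithiness (the payload below is invented), parsing a JSON document and totaling one of its fields takes only a couple of lines of standard-library Python:

```python
import json

# Parse a JSON payload and total one of its fields: no boilerplate required.
payload = '[{"item": "a", "qty": 2}, {"item": "b", "qty": 5}]'
total = sum(entry["qty"] for entry in json.loads(payload))
print(total)  # → 7
```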

For developers and engineers who are trying to quickly spin up new projects, that can make Python a good choice of language. It's also important to pay attention to its speed vis-à-vis other languages, such as Java; here's a helpful breakdown for you.

Kowalkowski also points to Python's large developer community, which helps the language thrive: "It's a popular language with sources on any question you might have. This makes learning simple and enables users to quickly feel like they can do anything with the right kind of help."

As with any language, a robust community is essential for everything from the development of new features to bug-squishing. "There are tons of third-party libraries [in Python] for any use case you can think of," Lutter said. "You can solve nearly every problem in Python, and you will find a lot of useful libraries from others working on similar problems that will help you to write easily readable and clean code."

Wade-Stein points to Python's quarter-million packages on pypi.org as a big reason why the language is so incredibly popular: "It's safe to say packages, which in effect expand Python from its original raison d'être of text processing and manipulation (it was certainly a Perl competitor when it first appeared in 1991) into a full-fledged data science powerhouse, are really the drivers for Python's current popularity."

Sachin Gupta, CEO and co-founder of HackerEarth, points to his company's 2020 developer survey, which noted that 55 percent of students know Python.

"Python is versatile and constantly reinvents itself," he noted, adding that the language allows developers to keep up with trends without having to relearn everything from scratch. Python's easy integrations with C, C++, and Java, as well as its constant updates, keep developers plugged in and up-to-date.

Modus co-founder and Managing Partner Jay Garcia points to Stack Overflow's 2020 developer survey, which reached a similar conclusion to HackerEarth's. "According to Stack Overflow's 2020 annual developer survey, Python is 3rd among the most loved and 1st among the most wanted programming languages," Garcia said. "This all nets out to a swath of free and paid resources to train your team and a robust market of skilled engineers to hire if you need to scale your team."

Garcia makes a great point about Python being a hirable skill, and others agree it can help land you jobs. Kowalkowski points out that the incredibly hot data-science market leans heavily on the language: "Data scientists often turn to Python for data-related actions due to the sheer number of useful libraries and open-source content. Artificial intelligence is a hot topic, and, under the hood, it relies on data science."

Python instructor Tom Taulli emphasized Python's relevance to machine learning and A.I. "When it comes to A.I., the language of choice is Python," he said. "It allows for easy scripting for data science projects, and there is the handling of massive amounts of data." Python also has an extensive ecosystem of add-ons, such as for TensorFlow, PyTorch, and Keras.
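As a toy illustration of the kind of data work being described (our own sketch; real projects would reach for the libraries named above), an ordinary least-squares line fit is only a few lines of plain Python:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]          # exactly y = 2x + 1
print(fit_line(xs, ys))    # → (2.0, 1.0)
```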

Finally,Reuben Yonatan, founder and CEO at GetVoIP, noted: Big tech companies such as Google, Uber, and Netflix use the language. As a Python developer, it makes it easier to find a job because big tech companies are always looking to add to their pool of skilled developers. That means smaller companies adopt the language, as well, creating lots of opportunities to not only build new products, but also maintain and improve legacy code.


Wikipedia: This new look is our first desktop redesign in 10 years – ZDNet

Wikipedia, the web's 20-year-old crowdsourced encyclopedia, is about to get a new-look desktop interface and its first redesign in a decade.

Launched in January 2001 by Jimmy Wales and Larry Sanger, Wikipedia has become an essential resource for knowledge about anything that contributors believe is worth documenting. The site is currently home to 53 million articles across over 300 languages.

The redesign aims to address what the Wikimedia Foundation admits is "clunky" navigation on the site's desktop interface, which makes it difficult for readers and editors to use.

SEE: Guide to Becoming a Digital Transformation Champion (TechRepublic Premium)

Wikimedia hopes the redesign will attract users who've come to the internet in the past decade without alienating existing users. It also wants to provide a less overwhelming experience and a less confusing side menu.

Some of the key Wikipedia design changes coming include a reconfigured logo, a collapsible sidebar, a repositioned search widget, a new user menu, and a link to articles in different languages in the title bar.

To improve page navigation, there's also a new table of contents menu that allows users to skip between different sections of an article, whether about a person's life, a subject, a thing, or an event. Wikimedia has published a series of gifs demonstrating the proposed changes.

"If all goes to plan, these improvements will be the default on all wikis by the end of 2021, timed with Wikipedia's 20th birthday celebrations," writes Olga Vasileva, lead product manager at the Wikimedia Foundation.

The first change due is the collapsible sidebar, which allows users to focus on content, as well as highlight key functionality such as the edit and history buttons, language switching and search.

The second change that Wikimedia is planning introduces a maximum line width to make content easier to read.



Baidu offers quantum computing from the cloud – VentureBeat

Following its developer conference last week, Baidu today detailed Quantum Leaf, a new cloud quantum computing platform designed for programming, simulating, and executing quantum workloads. It's aimed at providing a programming environment for quantum-infrastructure-as-a-service setups, Baidu says, and it complements the Paddle Quantum development toolkit the company released earlier this year.

Experts believe that quantum computing, which at a high level entails the use of quantum-mechanical phenomena like superposition and entanglement to perform computation, could one day accelerate AI workloads. Moreover, AI continues to play a role in cutting-edge quantum computing research.

Baidu says a key component of Quantum Leaf is QCompute, a Python-based open source development kit with a hybrid programming language and a high-performance simulator. Users can leverage prebuilt objects and modules in the quantum programming environment, passing parameters to build and execute quantum circuits on the simulator or cloud simulators and hardware. Essentially, QCompute provides services for creating and analyzing circuits and calling the backend.
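The article doesn't show QCompute's actual API, but the computation a circuit simulator performs can be sketched in a few lines of plain Python; the gate and helper names below are our own, not Baidu's:

```python
import math

# A single-qubit state is a pair of amplitudes [amp0, amp1]; gates are 2x2 matrices.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]  # Hadamard gate

def apply(gate, state):
    """Multiply a 2x2 gate matrix into the statevector."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

def probabilities(state):
    """Measurement probabilities are the squared magnitudes of the amplitudes."""
    return [abs(amp) ** 2 for amp in state]

qubit = [1.0, 0.0]            # start in |0>
qubit = apply(H, qubit)       # Hadamard puts it into an equal superposition
print(probabilities(qubit))   # → roughly [0.5, 0.5]
```

A real simulator like the one behind Quantum Leaf generalizes this to many entangled qubits, where the statevector grows as 2^n, which is what makes cloud-hosted simulation valuable.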

Quantum Leaf dovetails with Quanlse, which Baidu also detailed today. The company describes Quanlse as a cloud-based quantum pulse computing service that bridges the gap between software and hardware by providing a service to design and implement pulse sequences as part of quantum tasks. (Pulse sequences are a means of reducing quantum error, which results from decoherence and other quantum noise.) Quanlse works with both superconducting circuits and nuclear magnetic resonance platforms and will extend to new form factors in the future, Baidu says.

The unveiling of Quantum Leaf and Quanlse follows the release of Amazon Braket and Google's TensorFlow Quantum, a machine learning framework that can construct quantum datasets, prototype hybrid quantum and classical machine learning models, support quantum circuit simulators, and train discriminative and generative quantum models. Facebook's PyTorch relies on Xanadu's multi-contributor quantum computing project PennyLane, a third-party library for quantum machine learning, automatic differentiation, and optimization of hybrid quantum-classical computations. And Microsoft offers several kits and libraries for quantum machine learning applications.


Security researchers resolve crypto flaws in JHipster apps – The Daily Swig

John Leyden23 September 2020 at 11:27 UTC Updated: 24 September 2020 at 13:10 UTC

Nearly 4,000 pull requests were issued to fix dependent projects

UPDATED Security researchers have successfully run an exercise to refactor apps that inherited a cryptographic flaw from a vulnerable code generator, JHipster.

Both JHipster and JHipster Kotlin were updated in late June to break their reliance on a weak pseudo-random number generator (PRNG).

The vulnerability meant that an attacker who had obtained a password reset token from a JHipster or JHipster Kotlin generated service would be able to correctly predict future password reset tokens.

This made it possible for an unauthorized third party to request an administrator's password reset token in order to take over a privileged account.

Web applications and microservices built using a vulnerable version of either JHipster or JHipster Kotlin were not themselves fixed even after the code-generating utilities were updated to fixed versions (JHipster 6.3.0 and JHipster Kotlin 1.2.0, respectively).

Software engineer Jonathan Leitschuh estimated in early July that there were as many as 14,600 instances of vulnerable applications generated using vulnerable builds of JHipster on GitHub.

BACKGROUND App generator tool JHipster Kotlin fixes fundamental cryptographic bug

Over the course of 16 hours, 3,880 pull requests were issued to fix instances of CVE-2019-16303, the PRNG vulnerability in the JHipster code generator.

The same underlying vulnerability also affected apps made using JHipster Kotlin.

The root cause of the problem in both JHipster and JHipster Kotlin was reliance on Apache Commons Lang 3's RandomStringUtils, which draws on a non-cryptographic pseudo-random number generator by default, to generate tokens.
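The danger is easy to demonstrate. The actual flaw was in Java code, but the following Python sketch (illustrative names, not the JHipster implementation) shows the same failure mode: a token built on a seedable, non-cryptographic PRNG is fully predictable to anyone who recovers the generator's state, whereas a token drawn from the OS cryptographic generator is not.

```python
import random
import secrets

def weak_reset_token(rng: random.Random, length: int = 20) -> str:
    """Token built on a non-cryptographic PRNG: anyone who recovers
    the generator's internal state can predict every future token."""
    alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
    return "".join(rng.choice(alphabet) for _ in range(length))

def secure_reset_token(nbytes: int = 20) -> str:
    """Token drawn from the OS CSPRNG: past outputs reveal nothing
    about future ones."""
    return secrets.token_urlsafe(nbytes)

# An attacker who learns the PRNG state (modelled here as a shared
# seed) reproduces the victim's "random" tokens exactly:
victim = random.Random(1234)
attacker = random.Random(1234)
assert weak_reset_token(victim) == weak_reset_token(attacker)
```

The fix in both generators followed the same principle: switch token generation to a cryptographically secure random source.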

The JHipster app patching exercise, supported by GitHub Security Lab, relied on a code refactoring tool developed by Jon Schneider of source code transformation startup Moderne.

Leitschuh told The Daily Swig: "We plan to do this sort of thing again in the future with other vulnerabilities, but hopefully ones that are more complex and less cookie cutter."

JHipster is an open source package that's used to generate web applications and microservices. JHipster Kotlin performs the same functions to generate apps that are compatible with Kotlin, a modern cross-platform programming language.

This story has been updated and revised to reflect that the refactoring exercise focused on JHipster-generated apps and not JHipster, as first and inaccurately reported.

RECOMMENDED Critical XSS vulnerability in Instagram's Spark AR nets 14-year-old researcher $25,000

Continued here:
Security researchers resolve crypto flaws in JHipster apps - The Daily Swig

Communalism: The other virus in India | Opinion – Hindustan Times

Covid-19 is not the only virus stalking the nation. Hate is in the air, with social media acting as a super-spreader. Last week, on the death of the prominent social activist Swami Agnivesh, former Indian Police Service (IPS) officer N Nageswara Rao tweeted: "Good riddance... You were an anti-Hindu donning saffron clothes... my grievance against Yamraj (god of death) is why did he wait so long?" After protests from several Twitter users, the social media site pulled down the offensive tweet.

Rao is no ordinary police officer. In 2018, he was appointed acting director of the Central Bureau of Investigation (CBI) and also director-general, Fire Services and Home Guards, before retiring in July. That an officer holding such high posts should make such a spiteful remark is perhaps a sign of the times; those who swear allegiance to the Constitution are now wearing the ideology of hate on their khaki uniform, to the point of wishing death on someone. Worse, Rao has defended his hate speech.

Recall a similar tweet when journalist-activist Gauri Lankesh was shot dead in 2017. Then, a Surat-based businessman, Nikhil Dadhich, tweeted: "A bitch died a dog's death and now all the puppies are wailing in the same tune!" It was a disgusting remark that seemed to celebrate the assassination. It may even have passed unnoticed, but for an inconvenient truth: Dadhich was followed by Prime Minister Narendra Modi on the micro-blogging site. Once again, the individual was unapologetic.

Rao and Dadhich are not alone: There are thousands of anonymous Twitter handles, Facebook posts and WhatsApp groups that are designed to spread animosity between individuals and communities. Under the guise of being open-source platforms, the social media universe has created its own code of conduct where the lines between free speech and hate speech are often blurred.

These are, as an outstanding recent Netflix documentary, The Social Dilemma, puts it, the digital Frankensteins of our times, amoral beasts running amok in a social media jungle where the rules are being subverted to promote hatred and division.

This big tech-driven social media hate-machine is transitioning seamlessly into the news environment. Thus, to blame social media alone for stoking disharmony would be to run away from the nature of the virus. Hate is an infection that is contagious when it is normalised, as has happened in recent years. The anti-minority dog-whistles, for example, are now so frequently espoused that their expression is almost seen as routine. The "Indian Muslim as anti-national" narrative has been deliberately and repeatedly pushed by a section of the power elite so as to acquire a potency of its own. When a rabble-rousing Union minister screams in an election meeting, "Desh ke gaddaron ko", and the crowd responds with "Goli maaron saalon ko", there is little attempt made to rein in the minister. Or indeed, when anti-Citizenship (Amendment) Act (CAA) protesters are identified by their clothes or illegal immigrants are referred to as "termites", there is a brazen attempt to stoke religious prejudice. It is almost as if a hyper-polarised environment is a spur for incendiary communal rhetoric.

Just how far this normalisation of a narrative of hate and bigotry has travelled is best exemplified by the recent Sudarshan TV case, involving a series of programmes done by the channel to purportedly investigate a Muslim conspiracy to take over the civil services. A slogan, "UPSC jihad", was put out in a promotional video. Rather than acting ab initio against a programme that was prima facie intended to vilify the Muslim community, the information and broadcasting (I&B) ministry allowed the telecast, saying it did not wish to pre-censor the programme. This despite the fact that the I&B programming code allows the ministry to prohibit a programme if it is likely to promote hatred or ill-will between communities. It required the Delhi High Court and then the Supreme Court to step in and stop the further broadcast of the programme before the ministry finally issued the channel a notice. Maybe the ministry views Sudarshan TV with a more benevolent gaze, since the channel is perceived to be in sync with the ruling party's ideology.

But while Sudarshan TV may espouse an unapologetic militant Hindutva worldview, what of those mainstream channels which quietly push a daily drip of communal poison and fake news with the sole objective of demonising a community? Take for example the lynching of two sadhus at Palghar in Maharashtra a few months ago. Some channels projected the killings as a Hindu-Muslim conflict while lining up extremists from both communities in a slugfest that passes as prime time debate. Now, when it turns out that the claims of a communal angle are false and all those arrested are local tribals who mistook the sadhus for kidnappers based on WhatsApp rumours, will any news channel publish an apology for having misled viewers to garner television rating points? Those news traffickers who seek to profit from hate must be acted against swiftly. Only then can we find a vaccine to the virus that threatens to divide us.

Post-script: Since we started with a story of a police officer, let me end with a police officer too. For over a year now, I have been receiving WhatsApp messages from a senior IPS officer echoing the strident Islamophobia which is so prevalent today. The officer was once in charge of a city with a large Muslim population. Is it any surprise then that law-enforcers are often caught on the wrong side of the law when there is a communal riot?

Rajdeep Sardesai is a senior journalist and author

The views expressed are personal

Visit link:
Communalism: The other virus in India | Opinion - Hindustan Times

TIBCO Aims to Drive Faster Adoption of Real-Time Analytics – RTInsights

The new offerings aim to help businesses modernize data management, making data available to applications in real time, and manage data as a true business asset.

TIBCO Software today launched an initiative to accelerate the adoption of real-time analytics applications at a time when many organizations are accelerating digital business transformation initiatives to mitigate the impact of the economic downturn brought on by the COVID-19 pandemic.

Announced at an online TIBCO NOW 2020 conference, a TIBCO Cloud Data Streams offering combined with the latest edition of the TIBCO Spotfire analytics application is at the core of a TIBCO Hyperconverged Analytics platform that makes it possible to collect and analyze data in real-time.

See also: TIBCO Software Shares COVID-19 Analytics

At the same time, an existing TIBCO Responsive Application Mesh blueprint is being extended to make it simpler to achieve that goal, says TIBCO CTO Nelson Petracek.

At the core of that framework are a bevy of updates to existing TIBCO offerings, including a Big Basin update to TIBCO Cloud Integration that adds support for robotic process automation (RPA) capabilities alongside a revamped user interface.

TIBCO has also added TIBCO Cloud Mesh, which makes it simpler for IT teams to create and discover, for example, application programming interfaces (APIs) and integrations in TIBCO Cloud. The company has also updated TIBCO BusinessEvents to provide more contextual processing of events in real time via integrations with the open source Apache Kafka, Apache Cassandra, and Apache Ignite frameworks and databases.

Finally, TIBCO revealed that TIBCO Business Process Management (BPM) Enterprise can now be deployed using containers, and launched TIBCO Any Data Hub, a data management blueprint based on TIBCO Data Virtualization software that can be connected to more than 300 data sources.

"Data virtualization has emerged as a critical lynchpin for accelerating digital business transformation," says Petracek. In an ideal world, organizations would centralize all their data within a data lake. However, building data lakes takes time that many organizations don't have as they race to reengineer business processes, notes Petracek. Data virtualization tools enable applications to access data without having to move it, which Petracek says enables IT teams to more quickly deploy applications capable of accessing data anywhere it happens to reside.

"Data virtualization is at the core of those initiatives," says Petracek.

As they build and deploy these applications, organizations are also trying to move beyond batch-oriented processing to provide more responsive application experiences, notes Petracek. While there is currently a lot of focus on application development, Petracek says it's also becoming apparent that the way data is managed needs to be modernized as well, to make data available to applications in real time.

Ultimately, organizations are finally moving toward managing their data as a true business asset, notes Petracek. The issue is determining what data has the most business value, which in turn drives the digital business processes around which the organization operates, says Petracek.

It may be a while before organizations modernize data management across the entire enterprise. Data virtualization tools, however, clearly have a role to play in jumpstarting that process. The challenge now is figuring out first what data is required to drive a process and then rationalizing all the conflicting data that today resides in far too many application silos.

Excerpt from:
TIBCO Aims to Drive Faster Adoption of Real-Time Analytics - RTInsights

No pixel left behind: The new era of high-fidelity graphics and visualization has begun – VentureBeat

Presented by Intel

Everybody loves rich images. Whether it's seeing the fine lines on Thanos' villainous face, every strand of hair in The Secret Life of Pets 2, lifelike shadows in World of Tanks, COVID-19 molecules in interactive 3D, or the shiny curves of a new Bentley, demand for vivid, photorealistic graphics and visualizations continues to boom.

"We're visual beings," says Jim Jeffers, senior director of Advanced Rendering and Visualization at Intel. "Higher image fidelity almost always drives stronger emotions in viewers, and provides improved context and learning for scientists. Better graphics means better movies, better AR/VR, better science, better design, and better games. Fine-grained detail gets you to that Wow!"

Appetite for high quality and high performance across all visual experiences and industries has sparked major advances and new thinking about how computer-generated graphics can quickly and efficiently be made even more realistic.

In this interview summary, Jeffers, co-inventor earlier in his career of the NFL's virtual first-down line, discusses the road ahead for a new era of hi-res visualization. His key insights include: a broadening focus beyond individual processors to open XPU platforms, the central role of software, the proliferation of state-of-the-art ray tracing and rendering, and the myth of one size fits all. ("Just because GPU has a G in front of it," he says, "doesn't mean it's good for all graphics functions. Even with ray tracing acceleration, a GPU is not always the right answer for every visual workflow.")

Above: Intel's Jim Jeffers

Take a look at some of today's big graphics trends and impacts: Higher fidelity means more objects to render and greater complexity. Huge datasets and an explosion of data require more memory and efficiency. The data explosion is outpacing what today's card memory can address, leading to demand for more system-wide, efficient memory utilization. AI integration is producing faster results, and there's greater collaboration, from edge to cloud.

There's another new factor: interactivity. In the past, most data visualization was predominantly used to create static plots and graphs or an offline rendered image or video. This remains valuable today, but for simulations of real-world physics and digital entertainment, scientists and filmmakers want to interact with the data. They want to drill down to see the detail, turn the visualization around, and get a 360-degree view for better understanding. All that means more real-time operations, which in turn requires more compute power.

Above: A high-speed, interactive visualization of stellar radiation. Image credit: Intel and Argonne National Labs; simulation provided by University of California, Santa Barbara.

For example, UC Santa Barbara and Argonne National Labs needed to study the temperature and magnetic fluctuations over time of simulated star flares to better understand how stars behave. To visualize that dataset with 3,000 time-steps (frames), each about 10 GB in size, you need about 3 TB of memory. Considering a current high-end GPU with 24 GB of memory, it would require 125 GPUs packed into 10 to 15 server platforms to match just one dual-socket Intel Xeon processor platform with Intel Optane DC memory that can load and visualize the data. Further, that doesn't even factor in the performance limitations of transferring 3D data over the PCIe bus and the 200-300 watts of power needed per card in the processor platform they are installed in.
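The card count follows directly from the memory figures. A quick back-of-envelope sketch (the cards-per-server packing is our assumption, not stated in the article):

```python
import math

DATASET_GB = 3_000    # ~3 TB of time-series data to hold in memory
GPU_MEMORY_GB = 24    # a current high-end GPU card

# How many 24 GB cards does it take just to hold the dataset?
gpus_needed = math.ceil(DATASET_GB / GPU_MEMORY_GB)   # 125 cards

# Assuming roughly 8-12 GPU cards per server chassis, that lands in
# the article's "10 to 15 server platforms" ballpark:
servers_dense = gpus_needed / 12    # ~10.4 servers
servers_sparse = gpus_needed / 8    # ~15.6 servers
```

This is why a single large-memory CPU platform can win here: the dataset fits in one box instead of being sharded across a hundred-plus cards.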

Pretty clearly, a next-gen approach is crucial for producing these rich, high-fidelity, high-performing visualizations and simulations even faster and more simply. New principles are driving state-of-the-art graphics today and will continue to do so.

No transistor left behind. High-fidelity graphics require real-world lighting plus more objects, at higher resolution, to drive compelling photorealism. A virtual room created with one table, a glass, a grey floor with no texture, and ambient lighting isn't particularly interesting. Each object and light source you add, down to the dust floating in the air and reflecting light, creates the scene for real-life experiences. This level of complexity involves moving, storing, and processing massive amounts of data, often simultaneously. Making this happen requires serious advancements across the computing spectrum: architecture, memory, interconnect, and software, from edge to cloud. So the first huge shift is to leverage the whole platform, as opposed to a single processing unit. The platform includes all CPUs and GPUs, and potentially other elements such as Intel Optane persistent memory and FPGAs, as well as software.

A platform can be optimized towards a specialized solution such as product design or the creative arts, but it still uses one core software stack. Intel is actively moving in this direction. Over time, a platform approach allows us to continually deliver an evolutionary path to an XPU era, exascale computing, and open development environments. (More on that in a bit.)

No developer left behind. Handling all this capability and data pouring into the platform is complicated. How does a developer approach that? You have a GPU over here, two CPUs over there, and various specialized accelerators. A data center platform might have two individual CPUs, each with 48 cores, each core effectively its own CPU. How do you program that without blowing your mind? Or spending ten years?

What's needed is a simplified, unified programming model that lets a developer take advantage of all the available hardware capabilities without rewriting code for every processor or platform. Modern, specialized workloads require a variety of architectures, as no single platform can optimally run every single workload. We need a mix of scalar, vector, matrix, and spatial architectures (CPU, GPU, AI, and FPGA programmability), along with a programming model that delivers performance and productivity across all the architectures.

That's what the oneAPI industry initiative and the Intel oneAPI product are about: designing efficient, performant heterogeneous programming, where a single code base can be used across multiple architectures. The oneAPI initiative will accelerate innovation with the promise of portable code, provide easier lifts when migrating to new, innovative generations of supported hardware, and help remove barriers such as single-vendor lock-in.

No pixel left behind. The other key piece of the platform is open source rendering tools and libraries designed to integrate capabilities and accelerate all this power. High-performance, memory-efficient, state-of-the-art tools such as Intel's oneAPI Rendering Toolkit open the door to creating film-fidelity visuals not just across film/VFX and animation but also HPC scientific visualization, CAD, content creation, gaming, AR, and VR: essentially anywhere that better images, aligned with how our visual system processes them, are important.

Ray tracing is especially important in this new picture. If you compare the animated visual effects from a movie ten years ago with a movie today, the difference is amazing. A big reason for this is improved ray tracing. That's the technique that generates an image by tracing the path of light and then simulating the effects of its encounters with virtual objects to create better pixels. Ray tracing produces more detail, complexity, and visual realism than a typical rasterized scanline rendering.
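At its core, the technique casts a ray per pixel and tests it against scene objects. A minimal, illustrative sketch of the fundamental ray-sphere intersection test follows (hypothetical names and scene, not Intel's implementation; production renderers like Embree do this across millions of rays with heavy acceleration structures):

```python
import math
from typing import Optional, Tuple

Vec = Tuple[float, float, float]

def dot(a: Vec, b: Vec) -> float:
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_sphere_hit(origin: Vec, direction: Vec,
                   center: Vec, radius: float) -> Optional[float]:
    """Distance along the ray to the nearest hit, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    """
    oc = (origin[0] - center[0], origin[1] - center[1], origin[2] - center[2])
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # no real roots: ray misses
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t > 0 else None

# A camera at the origin looks down -z toward a unit sphere 5 units away;
# the ray strikes the near surface 4 units from the camera.
hit = ray_sphere_hit((0.0, 0.0, 0.0), (0.0, 0.0, -1.0),
                     center=(0.0, 0.0, -5.0), radius=1.0)
# hit == 4.0
```

Note that the sphere is described by just a center and a radius, which also hints at why non-triangle primitives, discussed below, can be so much cheaper to store and intersect than a dense triangle mesh.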

Compute platforms and tools have been continually evolving to handle larger data sets with more objects and complexity. So, it has become possible to deliver powerful render capabilities that can accelerate all types of workloads: interactive CPU rendering, global illumination with physically based shading and lighting, selective image denoising, and combined volume and geometry rendering. Intel's goal is to enable these capabilities to run at all platform scales: on laptops, workstations, across the enterprise, HPC, and cloud.

Above: New ray tracing technology provides powerful capabilities far beyond today's GPUs. Expanding model complexity beyond basic triangles to other shapes (above) accelerates rendering and increases accuracy while eliminating pesky inaccurate artifacts. Image credit: Intel

One of the most important new advances is in primitives, or graphics building-block shapes. Most products today, especially GPU-based products, are highly attuned to triangles only. They're the equivalent of an atom. So if you look at a globe in 3D, they're showing you a mesh of triangles. Up-leveling beyond triangles to other shapes means individual objects such as discs, spheres, and 3D forms like a globe or hair require a smaller memory footprint and typically much less processing time than, say, 1M triangles. Reducing the number of objects and required processing can help you turn your film around faster, say in 12 months instead of 18, achieve higher accuracy and better visual results, and be photorealistic with fewer visible artifacts. These existing ray tracing features plus new ones will take advantage of Intel's upcoming XPU platforms with Xe discrete GPUs.

A lot of this is already taking place. Take the example from the University of California, Santa Barbara, and Argonne National Labs we mentioned before. They're using a ray tracing method called volumetric path tracing to visualize magnetism and other radiation phenomena of stars. Using open-source software and several connected servers with large random-access plus persistent memory, researchers can load and interact (zoom, pan, tilt) with 3+ TB of time-series data. That would not have been feasible with a GPU-focused approach.

Film and animation studios have been on the leading edge of this new technology. Tangent Studios, working together with Baozou studios as creators of Next Gen for Netflix, delivered motion blur and key rendering features in Blender with Intel Embree. They're now doing renders five to six times faster than before, with higher quality. Laika, a stop-motion animation studio, worked with Intel to create an AI prototype that accelerated the time needed to do image cleanup, a painstaking job, by 50%.

Above: Bentley's interactive online configurator brings buyers ultra-high-res images of 10 billion orderable combinations of autos

In product design and customer experience, Bentley Motors Limited is using these pioneering open-source rendering techniques. It is generating, on the fly, 3D images of its luxury cars for a custom car configurator. Bentley and Intel demonstrated a prototype virtual showroom where buyers will interactively configure paint colors, wheels, interiors, and much more. The prototype included 11 Bentley models rendered accurately, with 10 billion possible configuration combinations, and used 120 GB of memory per node. The whole platform and ten-server environment ran at 10-20 fps, with hyper-real visuals, interactively, with AI-based denoising via Intel Open Image Denoise. More on graphics acceleration at Bentley here.

These new approaches come as we're on the doorstep of the exascale computational era: a quintillion floating-point operations in one second. Creating high-performance systems that deliver those quintillion flops in a consumable way is a huge challenge. But the potential benefits could also be huge.

Think about a render farm: effectively a supercomputing data center, likely with thousands of servers, that handles the computing needed to produce animated movies and visual effects. Today, one of these servers works on a single frame for eight, 16, or even 24 hours. It's typical for a 90-minute animated movie to have 130,000 frames. At an average of 12-24 hours of computation per frame, you're looking at between 1.5 and 3 million compute-hours. Not minutes, hours. That's 171 to 342 compute-years! Applying the exascale capabilities now being developed at Intel to rendering, with large memory systems, distributed capability, smart software, and cloud services, could reduce that time dramatically.
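The arithmetic above is easy to verify (a quick sketch using the article's own frame count and per-frame averages):

```python
FRAMES = 130_000            # frames in a typical 90-minute animated feature
HOURS_PER_YEAR = 24 * 365   # 8,760 hours in a year

low_hours = FRAMES * 12     # 1,560,000 compute-hours at 12 h/frame
high_hours = FRAMES * 24    # 3,120,000 compute-hours at 24 h/frame

# The article rounds these totals to 1.5M and 3M hours before
# converting, which is where its "171 to 342 compute-years" comes from:
low_years = 1_500_000 / HOURS_PER_YEAR    # ~171 years
high_years = 3_000_000 / HOURS_PER_YEAR   # ~342 years
```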

Above: Exascale computing could bring characters to life faster. Image credit: The Secret Life of Pets 2, Illumination Entertainment

Longer term, pouring exascale capability into a gaming platform or even onto a desktop could revolutionize how content gets made. A filmmaker might be able to interactively view and manipulate at 80% or 90% of a movie's final quality, for example. That would reduce the turnaround time, known as iterations, to get to the final shot. Consumers might have their own vision and, using laptops with such technology, could become creators themselves. Real-time interactivity will further blur the line between movies and games in exciting ways that we can only speculate about today, but will ultimately make both mediums more compelling.

NASA Ames researchers have done simulations and visualization with the Intel oneAPI Rendering Toolkit libraries, including wind tunnel-like effects on flying vehicles, landing gear, space parachutes, and more. When the visualization team showed their collaborating scientist an initial, basic rasterized visualization without ray tracing effects to check the accuracy of the data, the scientist said, "Yes, you are on the right track." A week later, they showed an Intel OSPRay ray-traced version. The scientist said: "That's great! Next time skip that other image, and just show me this more accurate one."

Innovative new platforms with combinations of processing units, interconnect, memory, and software are unleashing the new era of high-fidelity graphics. The picture is literally getting better and brighter and more detailed every day.

Learn more:

High Fidelity Rendering Unleashed (video)

##

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they're always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact sales@venturebeat.com.

Read this article:
No pixel left behind: The new era of high-fidelity graphics and visualization has begun - VentureBeat