The Achilles heel of AI might be its big carbon footprint – Mint

A few months ago, Generative Pre-trained Transformer 3, or GPT-3, the biggest artificial intelligence (AI) model in history and the most powerful language model ever built, was launched with much fanfare by OpenAI, a San Francisco-based AI lab. Over the last few years, one of the biggest trends in natural language processing (NLP) has been the increasing size of language models (LMs), as measured by the size of the training data and the number of parameters. BERT, released in 2018 and then considered the best-in-class NLP model, was trained on a dataset of 3 billion words. The XLNet model that outperformed BERT was trained on 32 billion words. Shortly thereafter, GPT-2 was trained on a dataset of 40 billion words. Dwarfing all of these, GPT-3 was trained on a weighted dataset of roughly 500 billion words. GPT-2 had only 1.5 billion parameters; GPT-3 has 175 billion.

A 2018 analysis led by Dario Amodei and Danny Hernandez of OpenAI revealed that the amount of compute used in the largest AI training runs had been doubling every 3.4 months since 2012, a wild deviation from the 24-month doubling of Moore's Law, amounting to a roughly 300,000-fold increase. GPT-3 is just the latest embodiment of this exponential trajectory. In today's deep-learning-centric paradigm, institutions around the world seem to be competing to produce ever larger AI models trained on bigger datasets with greater computation power.
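
As a quick sanity check on those figures, a 3.4-month doubling period compounded over roughly five years lands in the hundreds of thousands, while Moore's Law over the same stretch yields only a single-digit multiple. The sketch below assumes a 62-month window (roughly late 2012 to late 2017); the window length is an assumption chosen for illustration, not a figure from the analysis.

```python
# Back-of-the-envelope check of the compute-growth figures cited above.
def growth_factor(months: float, doubling_period_months: float) -> float:
    """Total multiplicative growth given a constant doubling period."""
    return 2 ** (months / doubling_period_months)

# ~62 months at a 3.4-month doubling period: a factor in the hundreds of
# thousands, consistent with the quoted ~300,000-fold increase.
print(f"Compute growth: {growth_factor(62, 3.4):,.0f}x")

# The same stretch at Moore's Law pace (24-month doubling) gives only ~6x.
print(f"Moore's Law growth: {growth_factor(62, 24):,.1f}x")
```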

The influential paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Timnit Gebru and others was one of the first to highlight the environmental cost of the ballooning size of training datasets. In a 2019 study, "Energy and Policy Considerations for Deep Learning in NLP", Emma Strubell, Ananya Ganesh and Andrew McCallum of the University of Massachusetts Amherst estimated that while the average American generates 36,156 pounds of carbon dioxide emissions in a year, training a single deep-learning model can generate up to 626,155 pounds of emissions, roughly equal to the carbon footprint of 125 round-trip flights between New York and Beijing.
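
Those two reference points can be combined directly: one such worst-case training run emits about as much carbon dioxide as seventeen average Americans do in a year, and the flight comparison implies roughly 5,000 pounds of CO2 per passenger per New York-Beijing round trip. A small sketch of the arithmetic, using only the figures quoted above:

```python
# Putting the Strubell et al. figures cited above into perspective.
TRAINING_EMISSIONS_LBS = 626_155   # worst-case single training run
AMERICAN_ANNUAL_LBS = 36_156       # average American's yearly CO2 emissions
ROUND_TRIPS = 125                  # NY-Beijing round trips quoted as equivalent

print(f"Training run = {TRAINING_EMISSIONS_LBS / AMERICAN_ANNUAL_LBS:.1f} "
      "American-years of emissions")                       # ~17.3
print(f"Implied per-round-trip emissions = "
      f"{TRAINING_EMISSIONS_LBS / ROUND_TRIPS:,.0f} lbs")  # ~5,009
```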

Neural networks carry out a lengthy set of mathematical operations for each piece of training data, so larger datasets translate into soaring computing and energy requirements. Another factor driving AI's massive energy draw is the extensive experimentation and tuning required to develop a model; machine learning today remains largely an exercise in trial and error. Deploying AI models in real-world settings, a process known as inference, consumes even more energy than training does: it is estimated that 80-90% of the cost of a neural network is incurred during inference rather than training.

Payal Dhar, in her Nature Machine Intelligence article "The Carbon Impact of Artificial Intelligence", captures the irony of this situation. On the one hand, AI can surely help reduce the effects of our climate crisis: through smart grid design, for example, by developing low-emission infrastructure, and by modelling climate-change predictions. On the other hand, AI is itself a significant emitter of carbon. How, then, can green AI, meaning AI that yields novel results without increasing computational cost (and ideally while reducing it), be developed?

No doubt, industry and academia have to promote research into more computationally efficient algorithms, as well as hardware that requires less energy. Software authors should report the training time and computational resources used to develop a model, which would enable direct comparison across models. But we need far more significant pointers to guide the future of AI. A strong contender for that role is the human brain.

Neuromorphic computing is an emerging field that studies the actual processes of the brain and uses this knowledge to make computers think and process inputs more like human minds do. Our brain, for example, carries out its multi-pronged activities on just 20 watts of power, whereas a supercomputer that is not nearly as versatile consumes more than 5 megawatts, 250,000 times as much. Many challenges that AI is attempting to solve today have already been solved by our minds over 300-plus millennia of human evolution. The brain is an excellent example of few-shot learning, drawing reliable conclusions from very small datasets. By understanding brain functions, AI can use that knowledge as inspiration or as validation; it need not reinvent the wheel.
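
The power comparison is easy to verify, and restating it as daily energy use makes the gap even more tangible (the 24-hour framing is just for illustration):

```python
# Verifying and restating the power comparison cited above.
BRAIN_WATTS = 20
SUPERCOMPUTER_WATTS = 5_000_000   # 5 megawatts

print(f"Power ratio: {SUPERCOMPUTER_WATTS / BRAIN_WATTS:,.0f}x")              # 250,000x
print(f"Brain, one day: {BRAIN_WATTS * 24 / 1000:.2f} kWh")                   # 0.48 kWh
print(f"Supercomputer, one day: {SUPERCOMPUTER_WATTS * 24 / 1000:,.0f} kWh")  # 120,000 kWh
```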

Computational neuroscience, a field in which mathematical tools and theories are used to investigate brain function down to the level of individual neurons, has yielded a great deal of new knowledge about the human brain. According to V. Srinivasa Chakravarthy, author of Demystifying the Brain: A Computational Approach, "This new field has helped unearth the fundamental principles of brain function. It has given us the right metaphor, a precise and appropriate mathematical language which can describe the brain's operation." That mathematical language makes computational neuroscience readily accessible to AI practitioners.

AI has a significant role in building the world of tomorrow, but it cannot afford to falter on its environmental credentials. "Go back to nature" is the oft-repeated mantra for eco-friendly solutions. In a similar vein, to build AI systems that leave a far smaller carbon footprint, one must go back to one of the most profound creations of nature: the human brain.

Biju Dominic is the chief evangelist, Fractal Analytics, and chairman, FinalMile Consulting



Artificial intelligence and Renewables: The new frontiers of geopolitics during and after the pandemic – Modern Diplomacy

The Covid-19 pandemic, which since the beginning of last year has affected the entire planet with tragic effects and, by sheer inertia, seems destined to continue for most of the current year, has not only had very severe effects in terms of overall mortality (over 2.5 million deaths to date), but has also generated catastrophic economic and social consequences in many countries of the world, starting with Italy.

As soon as the pandemic crisis is finally over from the health viewpoint, the governments of all affected countries will necessarily have to find the right instruments to set the economy in motion again, seeking new opportunities for development and recovery which, if properly seized and implemented, could in the next decade let us live in a better world than the one we left behind.

Last December a think tank of authoritative economists co-chaired by Professor Mario Draghi, the Group of Thirty, published the results of a study entitled "Reviving and Restructuring the Corporate Sector Post-Covid: Designing Public Policy Interventions".

The study starts from the observation that the epidemic has dramatically changed business paradigms worldwide, triggering a solvency crisis for companies in many countries.

This is now a structural crisis that requires politicians and governments to find financial support instruments for companies that can restart production and development.

The path indicated by the Group of Thirty is complex, but it starts from the need for politicians to immediately provide proactive support to the private-sector companies that have already demonstrated real resilience, so that scarce public resources are directed towards sectors that can recover quickly and drive the relaunch of the global economy.

In this regard, the Group of Thirty recommends that policymakers carefully consider the allocation of resources, which should not be wasted on subsidies to sectors doomed to failure, but rather directed to sectors that can recover from the crisis quickly and in a socially and economically acceptable manner.

The first sectors identified by the Group of Thirty as deserving immediate support because of their potential to drive recovery are digitalisation and the green economy.

It is therefore no coincidence that in the programme of the Italian government now led by Professor Draghi, the digital revolution and the green economy are top priorities for the strategic interventions to be implemented with the European Recovery Plan funds.

If appropriately matched by public support for smart, intelligent and effective forms of mutual interaction, digitalisation and the green economy can be decisive not only in the post-pandemic recovery, but can also deliver to our children a better, more efficient and healthier world than the one in which we lived before the coronavirus devastated our lives.

The pandemic, however, has hit the whole world regardless of borders, political tensions, regional problems, wars or riots.

It has affected the West and the East, the North and the South, without discriminating between rich and poor. The end of the crisis could therefore give politicians the chance for a new start, under the banner of new forms of solidarity and international cooperation which, along with Covid-19, would sweep away the old-fashioned and anti-cyclical barriers that could severely damage the construction of a better world.

In this regard, it is no coincidence that Pope Francis's first international commitment of 2021 was to visit long-suffering Iraq, not only to bring solidarity to the Christians persecuted and exterminated by the Caliphate, but above all to build a bridge towards Shiite and Sunni Muslims in the name of their common descent from Abraham.

The Pope's meeting with Ayatollah Al-Sistani, the highest religious authority in the Shiite world, shows that opening channels of dialogue between political and religious entities separated by centuries of enmity is concrete and feasible, even with a view to the post-pandemic renaissance.

Pope Francis's message should hopefully also reach the new Catholic President of the United States who, a few weeks after taking office at the White House, showed in his initial foreign policy moves a superpower's aggressive and revanchist spirit that the Americans (and not only they) had probably hoped would be left behind with the end of Donald Trump's era.

The opening towards Iran matched by bombings of Iranian militias in Iraq, the chill in relations with Saudi Arabia, and the unmotivated aggressiveness towards China (which has indeed shown the world it was the first to emerge from the pandemic and has taken on the health support of many African countries) are all moves that do not bode well for the search for realistic models of peaceful coexistence on the part of the world's leading power, the United States.

If the world's recovery from the pandemic is to be driven by science, as the Group of Thirty hopes, it is precisely in this field that international collaboration should be closest and most effective (as it has been in the research, production and distribution of vaccines).

A fundamental contribution to scientific progress will certainly come from advances in the field of Artificial Intelligence, a tool designed to support human intelligence, which will be able to accelerate and improve the processes of widespread digitalisation hoped for by many governments, starting with Italy's, in the drive for productive recovery.

In the field of Artificial Intelligence, as in vaccine research, there should be no excessive room for the isolationist tendencies that have always damaged science and encouraged illegal espionage.

Edison pioneered the commercial use of electricity, but no one could keep it within the United States' borders.

Industry has always outstripped politics in its ability to talk (and do business) across borders.

Yet, on March 1, the National Security Commission on Artificial Intelligence, established under President Trump two years ago, published its final report, in which it essentially suggested that the President and Congress use artificial intelligence research as a tool for surrogate warfare against China.

The National Security Commission's report reads as follows: "We must engage in competition on artificial intelligence... Competition will foster innovation and we must work with our partners to foster progress in this field as in the vaccine sector... But we must win the Artificial Intelligence competition by intensifying the strategic confrontation with China. China's plans, resources and progress should be of great concern to all Americans. China is second to none in Artificial Intelligence and is even a leader in some of its applications. We recommend that China's ambition to overtake the United States in Artificial Intelligence research and become the leader in this field over the next decade be taken seriously."

Therefore, in the words and recommendations of these scientists, scientific progress should be subordinate to the competition for geostrategic primacy.

Fortunately, serious scientists all over the world cooperate in common research far more than their governments might like, and the same holds true for companies that look for work and growth opportunities even beyond the borders that politicians would prefer.

Let us take the case of research and development in renewable energy, a fundamental link in the green economy which, according to the suggestions of the Group of Thirty and the European and Italian Recovery projects, should receive public support and drive the economic recovery.

While the American dream of both Trump and Biden is to create a barbed wire fence around China, Europe and Italy have understood that they can and must cooperate with the Eastern giant, starting with the search for clean energy from wind, sun and sea.

Thanks also to the personal commitment of the young Chinese Minister for Energy Resources, Lu Hao, who a few months ago, at the inauguration of the Chinese Expo for the Maritime Economy in Shenzhen, stated that China intended to promote a new development model capable of reconciling the protection of the marine ecosystem with the use of the sea as an energy source, the foundations have been laid in recent weeks for collaboration in marine energy research and production between the Italian Eldor Corporation, supported by the International World Group, and the National Ocean Technology Centre in Shenzhen, through the development of devices that obtain energy from wave motion and from the hydrogen contained in seawater. If these projects are adequately supported by the governments of Italy, Europe and China, they will make a fundamental contribution to getting the world out of the crisis quickly and effectively.

With all due respect to those across the Atlantic who have not yet realised that the pandemic crisis also calls for a smart redefinition of the economic frontiers of geopolitics.



Lee’s Famous Recipe using artificial intelligence at drive-thru to combat pandemic induced issues – dayton.com

"Our technology is conversational AI. It's basically composed of speech recognition technology that we have that is able to take the audio of your speech in a very noisy environment," said Hi Auto Chief Technical Officer Eyal Shapira.

The company specializes in voice recognition software used in cars and smartphones but has now brought its technology to the drive-thru with conversational artificial intelligence. The technology is able to extract a customer's voice from traffic noise or from other people talking in the car, either of which could otherwise make it difficult to understand what is being ordered. The second half of the technology is natural language understanding, which works out exactly what the customer wants.
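
A drive-thru voice assistant of this kind is typically a two-stage pipeline: speech is first isolated and transcribed, then the transcript is parsed for order intent. The sketch below is a hypothetical illustration of that flow, not Hi Auto's actual code; the function names, menu, and tiny keyword matcher are invented purely for demonstration.

```python
# Hypothetical sketch of a two-stage drive-thru voice pipeline:
# (1) isolate and transcribe the customer's speech, (2) extract the order intent.
# The helpers below are illustrative stand-ins, not a real vendor API.

MENU = {"famous chicken": 5.99, "fries": 1.99, "sweet tea": 1.49}

def transcribe(noisy_audio: bytes) -> str:
    """Stand-in for a noise-robust speech-to-text model."""
    return "two famous chicken and a sweet tea please"   # pretend transcription

def extract_order(transcript: str) -> dict:
    """Very naive natural-language step: match menu items and crude quantities."""
    quantities = {"a": 1, "one": 1, "two": 2, "three": 3}
    order, current_qty = {}, 1
    for word in transcript.split():
        current_qty = quantities.get(word, current_qty)
        for item in MENU:
            if transcript[transcript.find(word):].startswith(item):
                order[item] = current_qty
                current_qty = 1
    return order

order = extract_order(transcribe(b"\x00..."))
print(order)  # e.g. {'famous chicken': 2, 'sweet tea': 1}
```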

Doran said the Englewood location has been significantly impacted by the staffing hardships and would be a good location to adequately test the technology.

"It's become increasingly difficult to have people want to work in restaurants for a variety of reasons. The obvious one is the pandemic and potential exposure, and having to wear a face mask for eight to nine hours a day is another concern for a lot of people," he said.

With people still exercising caution when out and about, more have opted for drive-thru restaurants to limit their interactions inside businesses, which can increase wait times for customers. "Number one expectation of customers when they enter a drive-thru is speed of service. So I viewed this as an opportunity to potentially address that issue," Doran said.

Implementing the technology won't cut hours or payroll at the location, and if employees and customers respond well to it, Doran said there are plans to use it at at least four more locations.


Advantech and the Artificial Intelligence of Things – Automation World

On the opening day of Industrial Internet of Things (IIoT) platform provider Advantech's online conference, company representatives and other industry experts gathered to discuss new developments on the horizon for IIoT, artificial intelligence (AI), and industrial networking. In particular, many sessions focused on the hurdles that remain if IIoT and associated Industry 4.0 technologies are to see ubiquitous adoption.

The Advantech Connect conference continues online through May 6.

Perhaps the greatest take-away from the first day of the event was that, while the real bedrock of value provided by IIoT is to be found in the data it generates, nothing can be attained from it unless that data is effectively gathered, communicated, and analyzed. As such, several speakers spotlighted burgeoning technologies such as 5G wireless connectivity, intelligent sensors, and AI as the most consequential industry trends going forward. Through the improvements these technologies enable in data gathering, transmission, and analytics, Advantech envisions industry moving beyond IIoT and toward an Artificial Intelligence of Things (AIoT) that allows cloud-delivered applications to make real-time, autonomous decisions at the device level. Within this framework, cloud-based AI trained on large amounts of data can provide industry operators a means of more easily extracting value from their IIoT infrastructure in exchange for furnishing AIoT platforms with the datasets necessary to continue expanding their capabilities.
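
In practice, the AIoT pattern described above usually means training a model in the cloud on pooled data and then pushing it down to devices so decisions happen locally. The sketch below is a generic illustration of that split; the inlined model, feature names and threshold are assumptions for the example, not Advantech's platform.

```python
# Generic AIoT pattern: a model trained in the cloud is shipped to the device,
# which then makes autonomous decisions locally without a cloud round trip.

# In a real deployment this dict would be exported by a cloud training job and
# downloaded to the device; it is inlined here so the sketch runs as-is.
MODEL = {"weights": [0.8, -0.2], "bias": 0.1, "threshold": 0.7}

def score(features: list[float]) -> float:
    """Tiny linear scorer standing in for whatever model the cloud trained."""
    return sum(w * x for w, x in zip(MODEL["weights"], features)) + MODEL["bias"]

def on_sensor_reading(features: list[float]) -> str:
    """Autonomous device-level decision: act immediately, no network required."""
    return "shut_down_motor" if score(features) > MODEL["threshold"] else "keep_running"

print(on_sensor_reading([1.2, 0.4]))  # made-up vibration and temperature readings
```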

Allan Yang, chief technology officer at Advantech, stressed the need for a platform approach if AIoT is to be realized in a timely and cost-effective manner. "AIoT is cross-disciplinary. It requires edge computing, cloud platforms, data know-how, and domain expertise in many specific areas. No one company can do this alone successfully. However, we have seen many companies that are still trying to build their essential technology modules in-house, rather than adopting a platform approach," he said. "This takes a lot of time and involves a lot of trial and error. We strongly encourage all companies, regardless of their size, to evaluate the possibility of collaborating or engaging in a partnership to speed up adoption."

The Future of IIoT

The Advantech event also explored why IIoT adoption rates have not yet met projections, with Dirk Finstel, deputy managing director at Advantech Europe, noting that although 50 billion IIoT devices were expected to be in operation by 2020, only 8.5 billion have actually been deployed. According to Finstel, much of this can be attributed to shortcomings in the infrastructure needed to make large-scale IIoT a reality. He believes the high speed and bandwidth of 5G networking will improve the feasibility of many IIoT technologies that rely on cloud computing in the near future.

Advances in edge computing are also expected to play a larger role in IIoT deployments by easing the burden of sending large quantities of data into and out of plants via cloud computing applications, said Jerry O'Gorman, associate vice president at Advantech North America. Not only does O'Gorman see edge computing reducing costs and accelerating adoption; by extending cloud-native software to the edge, latency can be reduced and less bandwidth is required for data transmission. In fact, he estimated that by extending cloud-native software to the edge, up to 75% of the data generated may never need to be sent to the cloud.
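
One common way to realise that kind of reduction is simply to filter or aggregate at the edge and forward only the readings that matter. The snippet below is a minimal, hypothetical sketch of such a filter; the thresholds, the simulated sensor, and the rule that only out-of-range readings go to the cloud are assumptions for illustration.

```python
# Minimal edge-side filter: only forward readings the cloud actually needs.
# Thresholds and the simulated sensor are illustrative assumptions.
import random

NORMAL_RANGE = (20.0, 80.0)   # e.g. acceptable temperature band

def read_sensor() -> float:
    return random.gauss(50, 20)  # stand-in for a real sensor driver

readings = [read_sensor() for _ in range(1_000)]
to_cloud = [r for r in readings if not (NORMAL_RANGE[0] <= r <= NORMAL_RANGE[1])]

print(f"Forwarded {len(to_cloud)} of {len(readings)} readings "
      f"({100 * len(to_cloud) / len(readings):.0f}%); the rest stay at the edge")
```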

He also noted that Software-as-a-Service (SaaS) models are likely to grow in prominence as 5G allows complex applications to be rapidly delivered to the edge. O'Gorman believes this could greatly reduce costs for end users, making increasingly sophisticated AIoT applications easily accessible even to small- and medium-sized enterprises.

Business considerations

Though AI promises impressive new functionality, end users shouldn't expect it to solve every issue surrounding IIoT deployment and integration, said William Webb, author of The Internet of Things Myth, during his presentation at the Advantech event.

"There's a number of promising new developments in this field, but they need to be treated with caution and used in the right way. AI only works when you've got the data in the first place, and that means it can only enhance an IIoT system that's already there and working well," Webb said. "Until you've got an IIoT system in place delivering all of the data, you can't really use AI to make sense of that data."

According to Webb, approaching IIoT projects with an eye toward adjusting overall business processes in step may be the best way to ensure success. In numerous early IIoT deployments, it was not uncommon for operators to put new systems in place without fully realizing how much they would need to alter their overall operations to act efficiently on insights derived from their data, Webb noted. For example, even when equipment had been outfitted with IIoT technology that allowed failures to be predicted in advance, this information could only yield productivity gains once new processes were designed to allocate maintenance labor to the machines that needed it and redirect that labor to other valuable activities when they didn't. So while predictive maintenance is more efficient in theory, without proper systems support, fixed and regular maintenance schedules are simpler and easier to keep to in practice.

Of course, operators are shaking out these kinks, and predictive maintenance is now one of the most common applications for IIoT technology. Still, Webb stressed that it is challenges like these that highlight the importance of viewing IIoT projects not only as technological installations, but initiatives that also require cultural, workforce, and business-oriented changes within an organization.

Access registration for future Advantech Connect sessions here.


TypeScript Handbook Revamped as Primary Learning Resource – Visual Studio Magazine


Microsoft announced a rewrite of its TypeScript Handbook, the primary learning resource on the home site for its popular, open source programming language, a superset of JavaScript that introduces the capability to use static types.

Or, for an "official" description of what TypeScript is all about, one can consult the new handbook: "The goal of TypeScript is to be a static typechecker for JavaScript programs - in other words, a tool that runs before your code runs (static) and ensures that the types of the program are correct (typechecked)."
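
The same "check types before the code runs" idea can be illustrated in Python, the language used for examples elsewhere in this collection: annotate the types, and a static checker such as mypy flags the mismatch without ever executing the program. This is an analogy to what TypeScript does for JavaScript, not TypeScript itself, and the function is invented for the demo.

```python
# Analogy to TypeScript's static typechecking, using Python type hints.
# Running `mypy` on this file reports the mistake below before the code runs;
# running it under plain Python would instead fail at runtime, which is the point.

def total_price(quantity: int, unit_price: float) -> float:
    return quantity * unit_price

total_price("3", 2.5)   # mypy: argument 1 has incompatible type "str"; expected "int"
```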

That capability has vaulted the eight-year-old language to the upper pantheon of some indices that track programming language popularity, including this latest ranking from RedMonk.

The latest release, TypeScript 4.2, shipped just last month, with the dev team noting that planned improvements going forward included switching to the new handbook.

Several days ago, that switch was announced, with team engineer Orta Therox saying, "In the last year, the TypeScript team has heavily focused on ramping up the scale, modernity and scope of our documentation. One of the most critical sections of our documentation is the handbook, a guided tour through the sort of TypeScript code you'll see in most codebases. We want the handbook to feel like the first recommendation you give for learning TypeScript."

The handbook is accessed from the main TypeScript web site in the docs section, where it's described as "A great first read for your daily TS work."

It's structured so developers can go from start to finish in a few hours by following a left-side navigation pane, after which they should be able to:

The team took a new approach in the rewrite with some constraints that added more focus and resulted in a more approachable product, Therox said, listing these guidelines followed by the team:

The handbook is presented in several formats: Web / Epub / PDF. Any issues can be reported here.

About the Author

David Ramel is an editor and writer for Converge360.


Mayor Fulop Announces NJ’s 1st Arts & Culture Trust Fund to Generate $1M Annually for Arts Education and Programming Utilizing Community Input -…


Resolution to Reinstate City's Successful Open Space Trust Fund Goes Before City Council Tonight to Also Set Rate at a Quarter of a Penny

JERSEY CITY - Mayor Steven M. Fulop joins the Jersey City Municipal Council to announce that New Jersey's first municipal Arts and Culture Trust Fund will generate $1 million annually in critical long-term funding for Jersey City's burgeoning arts community. Additionally, the City's successful Open Space Trust Fund, on which the Arts Trust Fund is modeled, will also bring in a million dollars in tax revenue every year to expand and enhance green park space citywide, based on residents' input.

The Arts Fund is being implemented following the November 2020 election, where voters largely supported the sustainable funding source's implementation to directly benefit local artists and arts organizations, including youth and community programming, to help them grow and thrive. The City Council will vote tonight to set both tax levy rates at one-quarter of a penny.

"Arts and open space are two key quality-of-life components, especially in urban areas like ours, that have been severely undervalued for far too long. We actively engaged the community, and the voters responded strongly to the need for these responsible revenue streams to strengthen our City's infrastructure. We can now take the necessary steps to do exactly that," said Mayor Fulop.

The Jersey City Open Space Trust Fund was enacted by the Fulop administration in 2016. The levy was put on hold last year amid the extreme financial uncertainty surrounding the pandemic, and a resolution to reinstate it goes before the City Council today.

Both the Open Space and the Arts Trust Funds received strong support from voters to implement an annual tax not to exceed two cents ($0.02) per one hundred dollars of assessed property value. Each funding source will bring in approximately $1 million in annual revenue with the implementation of the $0.0025 tax levy.
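
Those figures let you back out the scale of the tax base involved: raising about $1 million at a quarter of a penny per $100 of assessed value implies roughly $40 billion of assessed property citywide, and the two-cent ceiling would allow up to eight times that revenue. A quick sketch of the arithmetic; the tax-base figure is derived from the article's numbers, not an official assessment:

```python
# Back-of-the-envelope arithmetic from the levy figures quoted above.
rate_per_100 = 0.0025          # a quarter of a penny per $100 of assessed value
annual_revenue = 1_000_000     # roughly $1 million per fund per year

implied_tax_base = annual_revenue / (rate_per_100 / 100)
print(f"Implied assessed property value: ${implied_tax_base:,.0f}")          # ~$40 billion

ceiling_rate = 0.02            # the voter-approved cap: two cents per $100
print(f"Revenue at the cap: ${implied_tax_base * ceiling_rate / 100:,.0f}")  # ~$8 million
```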

Mayor Fulop spent two years working closely with the Jersey City Arts Council to lobby state legislators to implement the mechanisms that would allow for long-term arts funding. Jersey City was first to take action when the state bill was signed into law by the Governor in December 2019, allowing municipalities to implement an Arts and Culture Trust Fund.

"The return on investment in the arts is invaluable to the entire community, not just to artists. It's a powerful tool with social, educational, and economic impacts that will continue to improve all of Jersey City for decades to come. The Arts Trust will generate four times more than what all of Hudson County receives from the State each year to fund arts and cultural programs. We're extremely encouraged by the Mayor's partnership with us to see this through after years of advocating together for this critical investment in our City," said Macadam Smith, Executive Director of the Jersey City Arts Council.

As part of the administration's commitment to expanding residents' access to quality park space citywide, Mayor Fulop recently announced the largest widespread park improvement initiative in decades, utilizing over $2 million generated by the Jersey City Open Space Trust Fund. The first allocation of the Open Space Trust Fund is currently updating over 20 parks spanning all six wards based on community input, with the historic Reservoir 3 in The Heights being the largest funding recipient.

Access to public park space is proven to improve residents' mental and physical health, property values, environmental outcomes, and community engagement, among other significant benefits.

"We created the Open Space Trust Fund Committee to equitably spread significant funding throughout all six wards utilizing community feedback," said Ward B Councilwoman Mira Prinz-Arey. "Now we have the potential to create meaningful, long-term support for our arts community, and to ensure we maximize this opportunity, we are using the Open Space Trust Fund and the Open Space Trust Fund Committee as a template to navigate these uncharted waters with the hopes of encouraging others to follow suit."



RedMonk Ranks Programming Languages Using GitHub and StackOverflow — ADTmag – ADT Magazine


Programming language rankings get regular headlines, and they should, at least from trend trackers like us. Among my favorites is the RedMonk quarterly, published this week. I like the methodology of their system, which extracts data from GitHub and Stack Overflow and combines them for "a ranking that attempts to reflect both code (GitHub) and discussion (Stack Overflow) traction."

In other words, it correlates what the cool kids are talking about with actual language usage "in an effort to extract insights into potential future adoption trends." It's a mix that makes it meaningful.

The latest ranking was posted by veteran industry analyst Stephen O'Grady on his RedMonk blog. "GitHub and Stack Overflow are used here, first, because of their size, and second, because of their public exposure of the data necessary for the analysis," he wrote.

O'Grady's post includes thoughtful observations about JavaScript, TypeScript, Ruby, Go, R, Kotlin, Rust, and Dart.

There's quite a lot of change afoot in the programming world, the analysts found, but one constant has been the rise of Python, which has maintained its top ranking ahead of Java. "Java was extremely hot on Python's heels and was in fact closer to the number one ranking than to PHP behind it, but Python's ability to defend its new high ranking is notable," O'Grady wrote.

Half of the Top 20 languages experienced "a degree of movement," O'Grady added, "which is very unusual. It's difficult to attribute this definitively to any higher level macro trends, but the data is consistent with an industry that picked the pace back up in the last two quarters of the year after the initial chaos of lockdowns and so on gave way to livable if extremely suboptimal new routines."

JavaScript is holding its own in the rankings. "[I]t is worth noting just how robust JavaScript's performance remains," O'Grady observed. "In spite of all of the competition from up and coming languages, all the discussion of fragmentation and even criticisms of JavaScript the language itself, it remains remarkably popular."

JavaScript pull requests are up 453% since Q1 2018, and they were up 96% from the last quarter "on an already massive base of commits."

"Simply put, JavaScript remains, its detractors notwithstanding, a force of nature like no other within the industry," he wrote, "and there are no indications in the data that this is likely to change any time soon."

TypeScript, which is a superset of JavaScript, moved up in six of its last eight quarterly RedMonk rankings, "and its popularity is evident when one looks around the industry." Ruby is on a gentle long-term downward trajectory, the analysts found. Go is slipping, too. R, a language for statistical computing and graphics, appears to be on a slow upswing. Both Kotlin and Rust showed signs of growing popularity. And Dart, an open source, purely object-oriented, optionally typed, class-based language, has risen since the advent of the Flutter framework.

The RedMonk report surrounds a cool plot of the language rankings with detailed analysis of key trends over the past quarter. As far as I'm concerned, it's a must-read.

Posted by John K. Waters on 03/04/2021 at 9:18 AM


Jupyter has revolutionized data science, and it started with a chance meeting between two students – TechRepublic

Commentary: Jupyter makes it easy for data scientists to collaborate, and the open source project's history reflects this kind of communal effort.


If you want to do data science, you're going to have to become familiar with Jupyter. It's a hugely popular open source project that is best known for Jupyter Notebooks, a web application that allows data scientists to create and share documents that contain live code, equations, visualizations and narrative text. This proves to be a great way to extract data with code and collaborate with other data scientists, and has seen Jupyter boom from roughly 200,000 Notebooks in use in 2015 to millions today.

Jupyter is a big deal, heavily used at companies as varied as Google and Bloomberg, but it didn't start that way. It started with a friendship. Fernando Pérez and Brian Granger met the first day they started graduate school at the University of Colorado Boulder. Years later, in 2004, they discussed the idea of creating a web-based notebook interface for IPython, which Pérez had started in 2001. This became Jupyter, but even then, they had no idea how much of an impact it would have within academia and beyond. All they cared about was "putting it to immediate use with our students in doing computational physics," as Granger noted.

Today Pérez is a professor at the University of California, Berkeley, and Granger is a principal at AWS, but in 2004 Pérez was a postdoctoral student in applied math at CU Boulder, and Granger was a new professor in the physics department at Santa Clara University. As mentioned, they first met as students in 1996, and both had been busy in the interim. Perhaps most pertinent to the rise of Jupyter, in 2001 Pérez started dabbling in Python and, in what he calls a "thesis procrastination project," wrote the first IPython over a six-week stretch: a 259-line script now available on GitHub ("Interactive execution with automatic history, tries to mimic Mathematica's prompt system").

SEE: Top 5 programming languages for data scientists to learn (free PDF) (TechRepublic)

It would be tempting to assume this led to Pérez starting Jupyter; it would also be incorrect. The same counterfactual leap could occur if we remember that Granger wrote the code for the actual IPython Notebook server and user interface in 2011. This was important, too, but Jupyter wasn't a brilliant act by any one person. It was a collaborative, truly open source effort that perhaps centered on Pérez and Granger, but also included people like Min Ragan-Kelley, one of Granger's undergraduate students in 2005, who went on to lead development of IPython Parallel, which was deeply influential in the IPython kernel architecture used to create the IPython Notebook.

However we organize the varied people who contributed to the origin of Jupyter, it's hard to get away from "that one conversation."

In 2004 Pérez visited Granger in the San Francisco Bay Area. The old friends stayed up late discussing open source and interactive computing, and the idea to build a web-based notebook came into focus as an extension of some parallel computing work Granger had been doing in Python, as well as Pérez's work on IPython. According to Granger, they half-jokingly talked about these ideas having the potential to "take over the world," but at that point their idea of "the world" was somewhat narrowly defined as scientific computing within a mostly academic context.

Years (and a great deal of activity) later, in 2009, Pérez was back in California, this time visiting Granger and his family at their home in San Luis Obispo, where Granger was now a professor. It was spring break, and the two spent March 21-24 collaborating in person to complete the first prototype IPython kernel with tab completion, asynchronous output and support for multiple clients.

By 2014, after a great deal of collaboration between the two and many others, Pérez, Granger and the other IPython developers co-founded Project Jupyter and rebranded the IPython Notebook as the Jupyter Notebook to better reflect the project's expansion outwards from Python to a range of other languages, including R and Julia. Pérez and Granger continue to co-direct Jupyter today.

"What we really couldn't have foreseen is that the rest of the world would wake up to the value of data science and machine learning," Granger stressed. It wasn't until 2014 or so, he went on, that they "woke up" and found themselves in the "middle of this new explosion of data science and machine learning." They just wanted something they could use with their students. They got that, but in the process they also helped to foster a revolution in data science.

How? Or, rather, why has Jupyter helped to unleash so much progress in data science? Rick Lamers explained:

Jupyter Notebooks are great for hiding complexity by allowing you to interactively run high level code in a contextual environment, centered around the specific task you are trying to solve in the notebook. By ever increasing levels of abstraction data scientists become more productive, being able to do more in less time. When the cost of trying something is reduced to almost zero, you automatically become more experimental, leading to better results that are difficult to achieve otherwise.
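
A typical notebook cell shows what Lamers means: a few lines of high-level code, run interactively, that load, summarize and plot a dataset with near-zero setup cost. The cell below is a generic illustration; the tiny inline dataset is invented for the example, and plotting requires matplotlib alongside pandas.

```python
# The kind of exploratory cell a Jupyter Notebook makes cheap to run and re-run.
# The dataset here is a made-up inline sample purely for illustration.
import pandas as pd

df = pd.DataFrame({
    "experiment": ["A", "A", "B", "B", "C", "C"],
    "accuracy":   [0.72, 0.74, 0.81, 0.79, 0.68, 0.70],
})

# One line to summarize, one to visualize; in a notebook the table and the
# chart render directly beneath the cell, alongside narrative markdown.
summary = df.groupby("experiment")["accuracy"].mean()
print(summary)
summary.plot(kind="bar", title="Mean accuracy by experiment");
```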

Data science is...science; therefore, anything that helps data scientists to iterate and explore more, be it elastic infrastructure or Jupyter Notebooks, can foster progress. Through Jupyter, that progress is happening across the industry in areas like data cleaning and transformation, numerical simulation, exploratory data analysis, data visualization, statistical modeling, machine learning and deep learning. It's amazing how much has come from a chance encounter in a doctoral program back in 1996.

Disclosure: I work for AWS, but the views expressed herein are mine.



The Shed Plans to Reopen for Covid-Tested Audiences – The New York Times

The New York City arts scene is about to pass another milestone on the road to reopening: The Shed, a large performing arts venue in Hudson Yards, said Wednesday that it would hold a series of indoor performances next month for limited audiences in which everyone has either been tested for the coronavirus or vaccinated against it.

The Shed said it would present four events next month: concerts by the cellist and vocalist Kelsey Lu, the soprano Renée Fleming and a string ensemble from the New York Philharmonic, and a comedy set by Michelle Wolf.

Each of the performances will be open to up to 150 people, all masked, in a space that can seat 1,280. The Shed said it would require patrons to present confirmation of a recent negative coronavirus test or confirmation of full vaccination; requiring testing allows the Shed to admit the largest number of audience members permitted under state protocols.

"In these first steps, there's limited capacity, but you have to start somewhere," said the Shed's artistic director, Alex Poots. "Those first steps are really important for us, for our audiences and for our artists, just the idea that we might return to something joyful."

The Shed is the third New York City arts presenter to announce specific reopening plans this week, following last week's announcement by Gov. Andrew M. Cuomo that arts and entertainment organizations could begin presenting indoor work for limited-capacity audiences. On Tuesday, the commercial producer Daryl Roth said she would present Blindness, an audio adaptation of the José Saramago novel, to audiences of up to 50 at her Union Square theater, and the Park Avenue Armory said it would present a series of music, dance, and movement works, starting with a piece by Bill T. Jones for an audience of 100. The Armory said it would require ticket buyers to take a free on-site rapid coronavirus test before entering.

Poots said the Shed would start with music and comedy because both have universal appeal, and they also align well with the guidelines that have emerged.

"It gets far more complex when you get into more intricate art forms that require a lot of costume changes or close-up rehearsal," he said. The productions are small, but not tiny; Lu will be accompanied by 14 musicians, and the Philharmonic ensemble will include 20 players. None of the performances will have intermissions.

The first performer, Lu, is planning to present an opera called This is a Test.

"I have been waiting for this day; it's been too long," Lu said. "There's nothing like that exchange between audience and performer. It's left a void for me and so many of us."

The Shed, like many arts institutions, canceled programs starting March 12 of last year. Since then, it has presented a visual art exhibition of work by Howardena Pindell, a filmed rendition of a play (November, by Claudia Rankine) and an online series of digital works. But these April events will be its first live performances with paying audiences. The Shed has some considerable architectural advantages under the circumstances: it is a new building with a state-of-the-art HVAC system capable of fully refreshing the breathable indoor air every 30 minutes, and its 18,000-square-foot main performance space opens directly to the outside.

The Shed plans to follow the April performances by hosting the Frieze New York art fair for the first time in May and, in June, hosting Open Call, a program for early-career artists, as well as programs in collaboration with the Tribeca Film Festival. Poots said he hopes that by fall, things will be getting a lot easier in terms of capacities and regulations.


Best Platform-as-a-Service Tools 2021 – IT Business Edge

Software development is no easy task, and platform maintenance, resource planning, and buying the right software licenses can further complicate it. Platform-as-a-service (PaaS) solutions remove some of these complexities, allowing developers to focus on what they do best. With PaaS, developers only have to worry about managing the applications or software they develop, and the PaaS provider handles everything else, including any platform maintenance, development tools, and database management.

Imagine how much more productive your developers can be when they don't have to worry about maintaining a development platform. To find the best PaaS tool for your business, you need to have an idea of what you're going to use it for and how much experience your developers have. For novice devs creating simple apps, consider low-code platforms. If you're confident in the skills of your team, look for pay-as-you-go platforms that only charge while your code is running. And if you offer custom-built websites, choose a platform that lets you build, design, and host them all in one place. Once you've narrowed down your shortlist, take advantage of free plans and trials when you can to find the platform that best fits your needs.

To make it easier for developers to find the right PaaS service, we've created this guide comparing the top platform-as-a-service tools of 2021.

Key takeaway: Google App Engine is a solid choice for app developers who use major programming languages and don't want to handle their own maintenance.

Google App Engine offers a fully managed platform that's perfect for building both web and mobile applications. It supports the most popular coding languages, including Python, Java, C#, and PHP. Within App Engine, you get solid logging and monitoring tools to help you diagnose the health of your app, allowing you to identify and fix bugs quickly. The service runs on a pay-as-you-go model, so you only pay for the resources you use; additionally, App Engine only consumes resources when your code is running.
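
For a sense of how little a developer manages on such a platform, here is a minimal sketch of the kind of Python web app a fully managed PaaS like App Engine can run: a single Flask handler, with the platform responsible for servers, scaling, and patching. The exact deployment configuration (a small file declaring the runtime) varies by environment and is not shown here.

```python
# Minimal sketch of a web app of the kind a managed PaaS such as App Engine runs.
# The platform handles servers, scaling, and patching; the developer ships only this.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello() -> str:
    return "Hello from a fully managed platform!"

if __name__ == "__main__":
    # Local testing only; in production the platform starts the app itself.
    app.run(host="127.0.0.1", port=8080, debug=True)
```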

Key takeaway: Plesk is best for web developers and designers who use custom code on their sites and need a platform that offers both development and hosting capabilities.

Along with application development, Plesk also provides a platform to create and host custom websites. The ready-to-code environment supports PHP, Java, Ruby, and most other major programming languages. Plesk is also available in 32 different languages. The self-repair tools allow you to handle technical issues without contacting support, and the Plesk mobile app lets you manage sites and servers on the go. Pricing is done on a monthly basis, and there are several different tiers available to fit your needs.

Key takeaway: AWS Elastic Beanstalk is best for applications that have already been coded and just need to be deployed or scaled.

AWS Elastic Beanstalk helps developers deploy and scale applications they've already created. Developers simply upload their code to the platform, and Elastic Beanstalk deploys it automatically, handling health monitoring and load balancing. It supports popular languages and runtimes like Java, Ruby, Go, and Docker, and familiar servers, including Apache, Passenger, and Nginx. As a service, Elastic Beanstalk is free to use; developers only pay for the AWS resources they use to store and run their applications.

Key takeaway: Platform.sh is a strong contender for developers who need a platform that supports both application development and web design.

Platform.sh allows you to develop, deploy, manage, and secure applications and custom websites from a single platform. The tool supports a large number of coding languages and frameworks, including Ruby, Drupal, WordPress, and Python. The Source Operations feature enables your code to update itself to cut down on your maintenance time, although you do need to upgrade from the basic package to get this option. There are three pricing tiers for you to choose from, and you can add more storage to each plan as needed.

Key takeaway: Azure Web Apps provides a solid, pay-as-you-go option for developers looking to build Windows or Linux-based applications.

Azure Web Apps offers a platform with continuous deployment and support for both Windows and Linux. The tool offers source code integration from GitHub, one-click publishing from Microsoft Visual Studio, and live debugging to improve the productivity of your development team. Azure Web Apps also provides an end-to-end view of your application's health, allowing you to make calculated decisions about how best to improve your apps. There are six pricing tiers to choose from, and costs are billed hourly based on the resources you use.

Key takeaway: IBM Cloud Foundry provides an open-source platform that gives developers a community of support and extra resources to improve their applications.

IBM Cloud Foundry is an open-source PaaS tool that prioritizes speed and ease of development. Third-party services like APIs or community buildpacks are available through a marketplace to improve functionality and give developers a community of support. Cloud Foundry lets you customize your development experience thanks to several different hosting models. Additionally, the platform is fault tolerant: it automatically replicates an instance if it fails, or duplicates it if more performance is needed. There is a free tier available, or you can pay for resources as you use them; there are no upfront costs.

Also read: Changing Application Landscape Raises New Cybersecurity Challenges

Key takeaway: Zoho Creator is a great option for developers with little coding experience thanks to the low-code nature of the platform.

Zoho Creator is a low-code app development platform that lets you build both simple and complex applications. The tool offers pre-built templates, visual builders, and code editors to simplify the development process and add automations, improving workflow management. Because the platform is low-code, it's designed to be used by anyone, not just highly skilled developers. There are three pricing tiers to choose from, and you can take advantage of a 15-day free trial.

Also read: No Code and Low-Code Coupled with SaaS Platforms Rise to the COVID-19 Challenge

Key takeaway: Dokku is a free PaaS platform best for developers looking to build applications on a budget.

Dokku is a PaaS tool powered by Docker that can be installed on any hardware or cloud provider. You can write plugins in any language and share them online with others, or you can take plugins that others have made and extend them to fit your needs. The platform is free; all you have to do is install the code on your hardware or cloud environment, and you can be up and running in just a few minutes. Once it's live, you can use Git to add Heroku-compatible applications.

Key takeaway: Salesforce Platform is designed for companies already using Salesforce that want to build applications to improve its functionality.

Salesforce Platform allows you to tailor Salesforce to meet all of your company's needs. You can add artificial intelligence (AI) to your apps and code in the language you're most comfortable with using Heroku. Not only can you build apps to improve Salesforce's functionality, but you can also customize the user interface to better fit your company's needs. Salesforce Platform is an add-on for the CRM software, so you will need a plan to get access.

Also read: Salesforce Extends Scope of Customer Experience Management Effort

Not all of the products on this list are going to be right for every company. You need to determine your priorities and ensure your developers have the necessary expertise to use the platform you choose.

If you're designing add-ons for software you currently use, like Salesforce, check the vendor's offerings to see if they provide any kind of PaaS before you invest in your own. Not only will this save you money, but you'll know the application you create will fit into the existing software.

The nice thing about many PaaS solutions is that you pay as you go, so you can try out a few different options before deciding on the right one for you.

Read next: Best Practices for Application Security
