Appsmith Announces $10.5 Million Series A and Seed Funding Rounds to Develop Low-Code Open Source Software – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Appsmith, the first open-source low-code software helping developers build internal tools, today announced that it has raised $8 million in Series A funding. The round was led by Canaan Partners, with participation from Accel Partners, Bessemer Venture Partners, OSS Capital, and angel investor Prasanna Sankar, co-founder and chief technical officer of Rippling. It follows an earlier $2.5 million seed round from Accel, bringing total funding to $10.5 million.

The company was founded in mid-2019, and its open source software has been downloaded over 5 million times, with users at over 1,000 enterprises in 100-plus countries. It has over 5,000 stars on GitHub and 130 contributors -- 100 of those from outside the company. Appsmith is the first open-source low-code software that helps developers quickly build custom (often critical yet tedious) internal and CRUD (create, read, update, and delete) applications, usually within hours. For example, a utility company had a complex requirement for a single view of customer data pulling from 17 different sources, including Zendesk, Salesforce, and multiple databases. One software engineer built the app in two days, work that would otherwise have taken months, and it is now used by over 200 people in the company.

Every enterprise needs to create custom applications -- a slow, repetitive, expensive process that requires building the user interface, writing integrations, coding the business logic, managing access controls, and ultimately deploying the app. By comparison, Appsmith lets software engineers work 10 times faster: they build the user interface with pre-built components, code the business logic by connecting APIs (application programming interfaces) and any database, then test and deploy a web application. Companies dedicate anywhere from 10% to 40% of their engineering resources to these internal tools. According to Gartner, the low-code development technology market is worth $13.8 billion.

"The low-code market is greatly underestimated and will grow fast as developers adopt new platforms like Appsmith to automate processes required in building custom software," said Joydeep Bhattacharyya, general partner, Canaan Partners. "Appsmith's open-source approach prioritizes the developer experience while also providing flexibility not possible with traditional SaaS. The team is seeing tremendous interest from many sectors and for many different use cases, which only highlights the universality of the problems Appsmith solves."

"Appsmith is addressing the crushing shortage of developers and the need to simplify the development process through automation," said Shekhar Kirani, partner, Accel. "Custom internal and CRUD applications are the workhorses of every enterprise, which relies on those apps in its operations. Everyone is looking for a solution to turn these around faster and more efficiently. The Appsmith team is well qualified and showing great progress in delivering its open source technology to help enterprises deal with the backlog of internal apps."

"Everything we do is with the developer in mind, to enable every company to build great software that addresses its most pressing internal needs," said Abhishek Nayak, co-founder and CEO, Appsmith. "We're taking the pain out of developing internal apps by delivering a highly customizable platform with a building-block approach, open source and freely available to anyone, to make it ultra-easy and fast for software engineers to build apps."

About Appsmith

Appsmith was founded in 2019 with the mission to enable backend engineers to build internal web apps quickly with a low-code approach. Taking an open source software approach provides anyone with access to the software and the opportunity to get involved in the community. The company has offices in San Francisco and Bengaluru, India. For more information visit https://www.appsmith.com.

More:
Appsmith Announces $10.5 Million Series A and Seed Funding Rounds to Develop Low-Code Open Source Software - Business Wire

Polyglot Programming and the Benefits of Mastering Several Languages – Analytics Insight

"Did you know that there is a group of African languages where there are no separate words for green and blue?" Michał Mela, a fan of natural language grammars, asks me.

"In Russian, on the other hand, there are two words for blue: one for dark blue and the other for the color of a clear sky. It has been experimentally proven that these language features translate into the practical ability to recognize colors. Language influences how we perceive the world. The same applies to programming languages."

Michał is not only a fan of neurolinguistics, but also a professional polyglot programmer: he knows Java, Groovy, Kotlin, Scala, JavaScript, some Ruby, Python, and Go, as well as curiosities such as Ceylon and Jolie.

Where did the idea for such a range of competencies come from? In the world of professional programmers, there is a controversial statement that almost every seasoned developer has come across: a good programmer should learn at least one new language a year.

This opinion is over 20 years old and was formulated in the book The Pragmatic Programmer, a classic that invariably inspires successive generations of IT specialists.

The idea of learning a new language each year was controversial as early as 1999, when it was articulated, but today the situation is even more confusing, because a language can be "new" in more ways than one. Functional and object-oriented programming, even in the same language, can be a more unfamiliar experience than simply learning a new language from the same family.

What's more, even within a monolingual ecosystem, there are frameworks that differ so far in their philosophy that switching between them is like switching languages: just compare React, Angular, and Svelte.

Despite the controversy, every experienced programmer can code in more than two languages, and some of them code in several or even a dozen.

For some of them, it's a side effect of working in the fast-moving world of information technology; for others, it's a conscious choice. The best engineers I've worked with often repeat the same mantra: "I'm not a Java/Python/JavaScript programmer, just a programmer. Languages are my tools."

Have polyglot programmers had the opportunity to use so many languages in their professional life? Mostly yes, although the greatest enthusiasts also learn experimental and historical languages, with no prospects for commercial use. We are talking about languages such as OCaml, LISP, Haskell, and Fortran.

It's worth adding that the above does not include esoteric languages, i.e. those belonging to the "just for fun" category: Whitespace, LOLCODE, or Shakespeare.

So, what motivates these developers to learn new languages? The first answer is far from surprising. "I remember Ruby's fall," Marek Bryling, a programmer with over 20 years of experience, tells me. "People who have been in software for a long time have to learn many languages over the years. That's the reality."

The younger generation is also familiar with the "memento Ruby" argument. "The decision to learn a new language is about career planning and risk diversification. Just look at Ruby," Michał says.

Most often, however, the people I spoke to learn new languages ad hoc, when they encounter new technological or market challenges. "The labor market used to be different than it is today. It was often easier to find a job in something completely new," Kamil Kierzkowski, a senior full-stack developer at STX Next, recalls.

So is learning new languages simply opportunistic adaptation to the labor market? Absolutely not! New languages clearly have the power to shape programmers, redirect their thinking, and broaden their horizons, and that's not the only advantage they bring to the table.

"Let me quote a classic," Michał clears his throat as he quotes Edsger Dijkstra, a pioneer of computer science. "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."

As you can see, the battles between supporters of individual technologies go back to the pre-internet era. It turns out that in a world of polarised opinions, being a polyglot can be very helpful. "I know enough languages to know what suits me," Marcin Kurczewski, an expert in over 10 programming languages, tells me. "Knowing many schools of programming gives me perspective."

Having this broad horizon allows you to form your own opinions about technology, and it also gives you a head start when evaluating new tools and products.

"It's obvious for Python programmers to use Prettier, Black, and other code autoformatting tools," Marcin points out. "When I recently started contributing to an open-source C/C++ project, I was surprised to discover that the project's technical leader rejected similar tools that are now becoming popular in the C/C++ world. He used arguments that Python zealots used 10 years ago."

Michał echoes him: "Java 8 finally introduced lambdas. A lot of purists complained: 'What have you done? You have destroyed this language!'" he laughs. "I knew lambdas from a different language, I had already figured out their advantages, and I quickly got the hang of using them in Java."

Interestingly, today, when more and more people begin their programming adventure with high-level languages, experience gained starting from the very basics turns out to be invaluable.

"For example, working with C++ helps. Thanks to C++, I understood how my computer and everything I run on it works," Marcin continues. "Knowledge of concepts such as the stack, the heap, registers, and memory management is useful when working with a computer, no matter what language you use."

Marek supports this opinion and gives a specific example from his own area of interest: "Python has an interesting feature: weak references, which don't increment the garbage collector's reference count. This is a very useful mechanism, but most people don't understand how it works because they don't know memory management from other languages."
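Marek's point is easy to see in a few lines. Here is a minimal sketch using Python's standard weakref module, showing a reference that does not keep its target alive:

```python
import weakref

class Node:
    pass

obj = Node()
ref = weakref.ref(obj)   # a weak reference: does not increment obj's reference count
print(ref() is obj)      # True: the target is still alive

del obj                  # drop the last strong reference; the object is collected
print(ref())             # None: the weak reference did not keep it alive
```

(In CPython the collection happens immediately thanks to reference counting; other implementations may delay it.)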

This trail leads us to the strongest argument for learning new languages: the practice develops the programming skills we use in the main language we specialize in. One developer convinced of this is Maciej Michalec, author of the polydev.pl blog.

"Problem-solving approaches in different paradigms differ significantly," he notes. "Python is a nice example of a language where you can write in an object-oriented and functional manner, and it's useful to know the different paradigms from other languages so that you can use them in Python."
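As a small illustration of that multi-paradigm point, here is the same computation written twice in Python, once functionally and once with an object (a sketch of the idea, not code from the interviewees):

```python
from functools import reduce

# Functional style: fold a pure function over a sequence
squares_sum = reduce(lambda acc, x: acc + x * x, range(10), 0)

# Object-oriented style: the same computation as state plus behavior
class SquareSummer:
    def __init__(self):
        self.total = 0

    def add(self, x):
        self.total += x * x

summer = SquareSummer()
for i in range(10):
    summer.add(i)

assert squares_sum == summer.total == 285
```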

"Because I know how something is done in one language, I can implement it better in Python," Marek adds. "That's how asyncio was created; it was mapped from Node.js. This flow of inspiration is possible when we know several languages and this knowledge goes beyond the syntax itself. It's like traveling: the more countries you visit, the more your mind opens up," he concludes.
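The Node.js-style event loop Marek refers to looks like this in Python's asyncio, a minimal sketch of two tasks awaited concurrently:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)   # stands in for non-blocking I/O
    return f"{name} done"

async def main():
    # Schedule both coroutines on the event loop and await them concurrently
    results = await asyncio.gather(fetch("a", 0.2), fetch("b", 0.1))
    print(results)               # ['a done', 'b done']

asyncio.run(main())
```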

In our conversations, we also delve into the topic of the future. What new languages and frameworks will be created and popularised on the market? Who will create them? Is it possible that polyglots will also play their part in this avant-garde of programming?

"Definitely, and especially those who like history," Marek says. "After all, in recent years we have gone back to the 1960s and we are reprocessing what was invented then: event architecture, microservices, functional programming."

"The cloud? It's an extension of mainframes. Even Docker results from reworking earlier concepts, such as FreeBSD jails or LXC containers. What finally grew out of them was Docker."

So, what's ahead? What other languages will gain popularity? Will there be more or fewer of them? Opinions are divided.

"I can see a certain consolidation trend around a few languages like JavaScript and Python, but in my lifetime we won't get to any programming lingua franca," Marek says. "I am concerned, though, that in some time 90% of programmers will only be able to do high-level programming. The same is already happening with DevOps: few can still work on bare metal because everyone has migrated to the cloud."

"We're not threatened by monolingualism," Maciej concludes. "PureScript and V are exciting new players. There will be more and more new languages, but at the same time it will be harder and harder for them to break through. Today, a rich ecosystem and the support of community developers are of key importance for any language. You can see it in Scala," he sighs.

"I love this language, but the community is very hermetic and pushes out those who haven't dealt with functional programming before. This affects the popularity of the language more and more."

The issues of community and ecosystem are also raised by Marcin, who is sceptical about Crystal, another contender in the crowded arena of programming languages. "Crystal is a compiled Ruby, and it's an interesting idea, but even the nicest, cleanest programming language is nothing without a solid ecosystem, which is missing there."

It seems that programming communities will decide the future of programming languages in a very democratic way, voting with their feet (or rather, with their fingers on the keyboard). In this vote, polyglots have an advantage: they get more than one vote.

Author

Marcin Zabawa, Director of Strategic Services, STX Next

See more here:
Polyglot Programming and the Benefits of Mastering Several Languages - Analytics Insight

Python programming bootcamps guide: Invest in a tech career with the right bootcamp – ZDNet

Have you ever wanted to break into tech but weren't sure where to start? Look into a coding bootcamp. Coding bootcamps are accelerated training programs that can prepare enrollees for entry-level programming work. Bootcamps often center on a particular area of web or mobile development, or on a particularly valuable coding language, such as Python. A Python bootcamp is an ideal opportunity to quickly learn Python coding skills online and prepare for working (and succeeding) as a coding professional.

As more aspects of our lives become governed by web and mobile technology and software, the demand for coders has gone up. Python programmers are in particularly high demand due to the high popularity and versatility of Python in many areas of tech, including data analysis, web development, and software engineering.

Experience with Python can serve as a gateway to a high-paying job as a developer, scientist, or engineer working in computer and IT services, marketing, or software development. And a coding bootcamp can provide that formative experience you need!

Read on for our guide to Python coding bootcamps: what they teach, how much they cost, and why what they teach should be important to programmers just getting started.

Since its first public release in 1991, Python has become one of the world's most commonly used programming languages. Python's popularity comes in part from the fact that it is a general-purpose language that can be used for a variety of purposes rather than being specialized to one discipline. Python is typically used for software engineering, web development, and data analysis and visualization. However, its applications are almost endless!

In data analysis and manipulation, Python is king. This is partly because it is very easy to use for complex tasks such as creating graphs, histograms, or 3D plots from statistical data. Python is also an integral part of the emerging machine learning field, where it is used to implement complex algorithms.
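For a sense of how little code such plotting takes, here is a minimal sketch using the widely used matplotlib and NumPy libraries:

```python
import matplotlib.pyplot as plt
import numpy as np

# Draw a synthetic sample and plot its distribution
data = np.random.default_rng(0).normal(loc=0.0, scale=1.0, size=1_000)

plt.hist(data, bins=30)
plt.title("Histogram of a normally distributed sample")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.show()
```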

Web development draws on Python to enable communication with databases and servers. Software engineering also commonly uses Python to automate tasks such as testing new software features.

Coding bootcamps are short, intensive training programs covering different topics in web and mobile development, such as a specific programming language. These programs differ from college or vocational school courses in that they are not accredited; however, they can be a valuable tool for people looking for a condensed introduction to coding that can prepare them for entry-level work.

The typical coding bootcamp takes between 10 and 17 weeks to complete. During this time, learners may tackle topics such as front-end web development, mobile app development, and user experience. The typical bootcamp covers the basics and gives an overview of what kind of work learners can expect.

The difficulty of getting into a coding bootcamp varies by provider. Some more exclusive bootcamps may expect applicants to present a digital portfolio; others may not. You can expect to fill out an application and complete an application essay, followed by an in-person or phone interview, to get into most bootcamps.

Python is thought of as an ideal programming language for beginners. Though it takes years to master its many applications, one can easily learn the basics of Python within a few weeks. For this reason, it is often taught in coding bootcamps, which typically start with the language fundamentals.

Different bootcamps vary by which skills they cover, however.
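As a taste of those fundamentals, here is a short, hypothetical sketch of the kind of first-week material (variables, functions, loops) most Python bootcamps begin with:

```python
# Typical first-week material: variables, a function, and a loop
greeting = "Hello, bootcamp!"          # a string variable
temperatures = [18.5, 21.0, 19.2]      # a list of floats

def average(values):
    """Return the arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

for t in temperatures:
    print(f"reading: {t} C")

print(f"{greeting} Average temperature: {average(temperatures):.1f} C")
```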

Coding bootcamps run a wide gamut in terms of pricing. As of 2021, Python bootcamps could cost anywhere between $1,500 and $20,000. The price tag attached to your training experience depends on your provider, whether you attend full or part time, and program length.

Many careers in the tech field rely heavily on Python, including data analysis, web development, and software engineering.

However, professionals in many different industries often use Python as a tool for automation or data analysis. Chances are that Python could prove a great help to you too!

One major advantage to learning Python is its applicability to a variety of well-paid job titles. In most careers where Python is a foundational skill, you can expect to make anywhere between $60,000 and $120,000 annually, according to median salary data from the Bureau of Labor Statistics (BLS).

Of course, the upper percentile of earners in these professions can easily earn even more.

There are many advantages to learning Python. As a general-purpose language, Python's applications extend across many different fields. There is currently high demand for tech professionals who know Python, giving you many options to switch careers or advance professionally.

Python's simple syntax makes it intuitive for beginners to understand. If you are new to programming, Python can encourage you to try learning other programming languages and expanding your skills.

Because Python is an open source language, it is easy to get feedback on your code from the global coding community. There are also many online courses that give the opportunity to learn Python for free.

Coding bootcamps differ from college courses and programs in that they are typically offered by for-profit companies and lack official accreditation. However, graduating from a coding bootcamp shows initiative and willingness to learn, which makes a generally positive impression on employers. Completing a coding bootcamp will give your resume a boost, along with your odds of getting a tech-related job. Additionally, some employers offer tuition reimbursement for attending coding bootcamp.

Some Python bootcamp providers are affiliated with universities, such as Bottega University. Other providers remain independent. Occasionally, Python bootcamps offer job placement guarantees or promise tuition refunds for qualifying graduates who cannot get a job within a certain time frame.

The following list of Python bootcamp programs represents some of the country's most highly regarded bootcamps for learning Python specifically. Read on to learn what makes each program unique and see which is a good fit for your plans.

Lambda School's data science and full-stack coding bootcamps cover Python in a learning experience where enrollees complete 900 hours of coding. The program is online-only and lets learners choose between part-time and full-time study. Tuition starts at $30,000; however, Lambda School, an independent provider, stands out for its reasonable payment plan options.

Yes. Coding bootcamps give you the opportunity to take in a lot of information in a short time, while showing future employers your willingness to learn new skills.

There is no single "best" Python coding bootcamp. The best Python coding bootcamp for you is the one that best meets your unique needs as a learner.

Coding bootcamps are a great place to start if you are a coding novice. This is especially applicable to Python, one of the most easily accessible programming languages.

See the original post:
Python programming bootcamps guide: Invest in a tech career with the right bootcamp - ZDNet

Top Programming Languages that Will Become Dominant in 2022 – Analytics Insight

Programming languages are computer languages that programmers (developers) use to communicate with computers: sets of instructions written in a specific language (C, C++, Java, Python) to perform a specific task. Programming languages are mainly used to develop desktop applications, websites, and mobile applications. Here are the top languages that are set to be most popular in 2022.

Python was created by Guido van Rossum in the late 1980s in the Netherlands. Initially conceived as a successor to the ABC language, Python slowly shot forward in popularity and has since built a huge following among both the researcher and developer communities. Python sits at the top of the IEEE Spectrum language ranking with a perfect score of 100, and it commands strong developer support, with a support percentage of 44.1%.

Python is suitable for pretty much anything. You have Django and Flask, which can be utilized for web development, while scientific tools like Jupyter and Spyder are used for analysis and research. If you're into automation, Selenium is out there to help you! The flexibility of the language allows Python to be used pretty much anywhere. These, by far, are the more popular products of the Python ecosystem. Python's huge support base (second only to that of JavaScript) produces tons of packages, frameworks, and even full-fledged open-source software using the language.
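To illustrate how lightweight web development can be in this ecosystem, here is a minimal Flask application, a sketch using Flask's documented API:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # A single route returning plain text
    return "Hello from Flask!"

if __name__ == "__main__":
    app.run(debug=True)  # development server at http://127.0.0.1:5000
```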

Python probably has the largest support for data science and machine learning in general. While languages like R and MATLAB do offer competition, Python is the strict ruler of the data science space. A majority of the frameworks and libraries used in machine learning are built for Python, making it probably the best language to pick up if one wants to learn about machine learning (or data science in general).
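A few lines with scikit-learn, one such widely used Python library, show why the language dominates here; this sketch trains and scores a simple classifier on a bundled dataset:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)  # raise max_iter so the solver converges
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```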

JavaScript is pretty much the industry leader at this point. Built originally in 1995 as a scripting language for Netscape Navigator (one of the best browsers back in the day), JavaScript's ascent to greatness has been swift. It wasn't until 2008, when Google built the V8 engine for Google Chrome, that modern high-performance JavaScript arrived. Originally positioned by Netscape as a companion to Java, JavaScript now commands a space of its own in the development sphere, and its popularity has made it widely regarded as the language of the internet. JavaScript enjoys the highest support among developer communities: as high as 67.7%. In general, JavaScript is suitable for any kind of development activity: mobile app development, web development, desktop app development, and so on.

JavaScript has a wide variety of libraries and frameworks that can be utilized during development. There's Angular, Vue, and React for frontend development, while Node.js is a very flexible runtime for working on the backend. Jest and Mocha are two flexible tools that help set up unit tests to check whether functionality is working as intended. Of course, if you're not very comfortable with any of these, you can just go for vanilla HTML, CSS, and JavaScript for the frontend; it's that simple! Because of the enormous support from developers around the world, JavaScript has the largest number of support packages that any language can boast. Despite that, people continue to build more and more packages to add to the ease of using the language.

Built in 1991 by James Gosling, Mike Sheridan, and Patrick Naughton as the language Oak, Java was the first language to have a big global impact. While the new programming language used the same format as C/C++, it incorporated new ideas to make it more appealing to more people. Java runs on the principle of "Write Once, Run Anywhere," meaning that systems with varying hardware and OS configurations can run Java programs with ease.

Java also has a wide variety of libraries and frameworks. Java is used for app development through Spring and Hibernate, and JUnit helps set up unit tests for Java projects. Most importantly, Java is used in the development of native Android applications (the Android SDK is itself powered by the Java Development Kit, or JDK). Java is probably the language most people were introduced to in an introductory computer programming course in college or school, as it is the language used for teaching object-oriented programming to the masses.

Java is also highly respected in the field of analytics and research. The main problem with Java is that there are comparatively few new community packages and projects for the language at present; there's little community involvement of the kind most mainstream languages enjoy. Despite that, Java is very easy to pick up and learn, which partly explains its appeal. However, it does take some time to attain real mastery of the language.

Perhaps one of the most surprising entries in this article is C++. Despite being the language most people use to learn the concepts of data structures and algorithms, C++ is often assumed to see little use in the practical world. First created by Bjarne Stroustrup as an extension of the C programming language in the early 1980s, C++ went on to make a name for itself in the years to come.

C++ finds use in analytics and research as well as in game development. The popular Unreal Engine uses C++ as the scripting language for all of the functionality one can define while building a game. C++ also finds extensive use in software development. Sitting midway between the object-oriented and the procedural approach allows C++ to be flexible in the nature of software it can produce. Its fourth place in the TIOBE index signifies that C++ continues to have appeal to this day. C++ is also extensively used in system software development, being easier to understand than lower-level languages; the main reason for using C++ in a sensitive area like an OS is that it compiles to fast native code with very low runtime overhead.

C++ probably has the largest learning community among all languages. Most students start their algorithms courses building trees, linked lists, stacks, queues, and numerous other data structures in C++. Naturally, it is quite easy to pick up and learn, as well as to master, if one pays attention to details.

TypeScript is a superset of JavaScript and has almost the same applications: web development, mobile app development, desktop app development, and so on. TypeScript is the second most loved language on Stack Overflow's list, loved by 67.1% of developers (second only to Rust).

TypeScript is mainly a language meant for development work, so it does not have much appeal to the scientific community. However, because of TypeScript's new features, one can expect it to inspire somewhat more interest in research. The language has a gentler learning curve than JavaScript, and many difficult-to-understand behaviors of JavaScript have been simplified in TypeScript. In other words, you have a slightly lower chance of banging your head against a wall.

New languages are rising sharply on the horizon, with new contenders coming up to challenge the throne shared by JavaScript and Python. Made by Google (both have "Go" in their names!) to simplify large-scale systems programming with straightforward built-in concurrency, Golang has built up a mass following within a short time. Golang already ranks fifth among the most loved languages in Stack Overflow's survey, adored by 62.3% of developers.

Golang is used in multiple areas, both for developing robust software and for the backends of web and mobile applications. Golang even supports a rudimentary amount of web development. While it's still not in a position to replace JavaScript as the language of the web, it is fast becoming a language that supports the next phase of the web.

Golang is slightly more difficult to learn than the other languages on this list. Moreover, Golang is an open-source language that changes with every major update, so staying current is a necessity.

Dart is one of the fastest-growing languages in the industry. Google's contribution in the language sphere has increased significantly to compete with the rising popularity of Microsoft's TypeScript. Dart has been widely adored by programmers around the world for its simplicity.

Dart is used in multi-platform application development. Like JavaScript, Dart is used for building software that can be run by anyone with an electronic device. The most famous use of Dart currently is in Flutter, a framework for mobile app development. Recent Google Trends data shows that Flutter, despite being the newer framework, is more popular than React Native, an established mobile app development framework.

Dart is simpler to learn than JavaScript and manages to simplify even difficult-to-understand cases really well. With both TypeScript and Dart on the market, programmers are spoilt for choice when it comes to picking a language they really want to learn.

View original post here:
Top Programming Languages that Will Become Dominant in 2022 - Analytics Insight

From tech tool to business asset: How banks are using B2B APIs to fuel growth – McKinsey

Each time someone searches for flights on a travel aggregation site or shops online, APIs (or application programming interfaces) work in the background to make this happen. These lines of software let two different systems communicate and exchange data with one another, whether flight information from airlines or updated inventory from suppliers. Compared with traditional point-to-point integrations, APIs are flexible, cost-efficient, and easy to operate.

In the financial services industry, APIs are transforming the way B2B banking is done. As an easy means of money and data transfer between a bank's systems and those of third parties, these tools pave the way for banking services to be embedded directly into a corporate customer's own platforms. Instead of having to go to a bank's own app, portal, or website, for instance, APIs enable customers to link their enterprise, treasury, and accounting systems (such as SAP or Xero) with financial information provided by banks. These companies can then initiate payments, manage liquidity, and download bank statements through their own systems, with the bank essentially becoming invisible.
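Mechanically, such an embedded-banking call is just an authenticated HTTP request from the customer's system to the bank. The sketch below uses Python's requests library against a hypothetical bank API; the base URL, endpoint, and payload fields are purely illustrative, since every bank publishes its own schema and authentication flow:

```python
import requests

# Hypothetical endpoint and payload, for illustration only: real bank APIs
# define their own base URLs, schemas, and authentication flows.
BASE_URL = "https://api.examplebank.com/v1"
headers = {"Authorization": "Bearer <access-token>"}

payment = {
    "debtor_account": "GB29NWBK60161331926819",
    "creditor_account": "DE89370400440532013000",
    "amount": {"currency": "EUR", "value": "2500.00"},
    "reference": "Invoice 2021-114",
}

# Initiate a payment directly from the corporate ERP or treasury system
resp = requests.post(f"{BASE_URL}/payments", json=payment,
                     headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())  # e.g. a payment id and status returned by the bank
```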

APIs also enable banks to offer trade-financing services on new B2B platforms, such as PrimeRevenue, Taulia, and Tradeshift. These companies, which provide businesses with working capital finance solutions, let corporate clients offer their suppliers early-payment options or automation of invoice processing.

To understand what stage the banking industry is at in this transformation, we surveyed financial institutions of varying sizes (local, multiregional, and global). This research, part of McKinsey's latest global survey on the State of APIs in Global Transaction Banking (GTB), found that, on average, just over half of a bank's B2B APIs are currently used to connect its internal systems, such as front-end servers to back-office servers. However, in the next three years this ratio will shift, with most new APIs connecting banks to systems outside the organization.

Although APIs represent a significant disruption to the way B2B banking services have traditionally been delivered, they also offer significant opportunities. In the same way that APIs make many online products and services possible for consumers, they open up a wide range of possibilities for banks: with potential to generate income growth from existing and new corporate customer segments, improve customer experience, and energize innovation.

To take advantage of this potential, banks will need to see APIs as not just a tech tool for software developers but an important strategic asset and mainstream business priority. This means building a wide bridge between the business and technology functions, which too often still operate as distinct areas. In our survey, we asked banks to rate the extent of their collaboration along five key dimensions:

Banks told us that, on average, they are more than halfway through the work they need to do (Exhibit 1). We also asked them about the drivers behind their API efforts and how they are monetizing new products and services. Segmenting across size, geography, and maturity level, we identified a few key differences and findings.

Exhibit 1

Smaller domestic or regional banks are head-to-head with the big global and multiregional institutions. Both types of banks have identified APIs as a strategic priority and made similar overall progress, with some minor differences. Global and multiregional banks have progressed further in technology enablement, such as allowing developers at fintechs and other third parties to access their APIs and related SDKs on a convenient public portal. Domestic and regional banks, meanwhile, have been able to move more quickly to hire a substantial team of API developers, in part because they have the advantage of not needing to fill as many roles.

North American and Asia-Pacific banks lead the pack. Banks in these regions have an average maturity of 70 percent, followed by Europe (65 percent) and the Middle East, Africa, and Latin America (55 percent). The maturity of North American banks is driven largely by the clarity of their business-backed strategies and their ability to secure and prioritize key talent. At Asia-Pacific banks, however, the go-to-market approach is significantly more mature.

For instance, DBS RAPID, the API-powered digital solution from Singapore's multinational DBS Bank, offers its corporate customers a wide range of real-time banking transactions and services that can be integrated into their systems or platforms. A leading insurance company is using the solution to offer its customers quicker payment of travel insurance claims: from a few days to just seconds. Similarly, a ride-hailing company uses DBS RAPID to let its drivers cash out their earnings instantly, instead of having to wait up to two working days.

Leaders are pulling well ahead of laggards on several dimensions. Banks that give the highest scores to their API maturity level have attracted and retained the right talent and invested in strong business-IT collaboration, including joint funding for the development of API-based products and services. They have also helped future-proof their technology by providing access to SDKs that let other developers build on top of a bank's products and services. As a result, the banks in the top third of API maturity have achieved a disproportionate impact in the effectiveness, breadth, and revenue-generation potential of new products and services (Exhibit 2).

Exhibit 2

APIs are seen as drivers of new revenue. More than 90 percent of respondents said they use or plan to use APIs to generate additional revenue among existing customers, and three-quarters said they are looking for revenue streams from new customers. A related objective is the ability to innovate (also cited by three-quarters of respondents), followed closely by the ability to integrate with third-party capabilities (72 percent). Finally, just over half of respondents said they want to use APIs to enhance operational efficiency, such as by improving and streamlining integration with a customer's enterprise resource planning (ERP) system (Exhibit 3).

Exhibit 3

Customer fees for API calls are the go-to monetization model. When banks leverage APIs to launch new products or services, 80 percent charge customers fees to use them, for example by charging for real-time payment collections and reconciliation. The second most popular model is revenue sharing with an ecosystem partner; 63 percent of respondents said they use this. One leading global bank, for instance, partnered to deliver compliance checks, that is, the flagging of potential money-laundering transactions. Finally, half of all respondents said they generate value through data- and analytics-driven insights, such as information on liquidity management and payment flows.

Financial institutions that have moved ahead in their use of APIs have successfully positioned these connectivity tools at the center of their business and innovation agenda. To do this, they've taken five critical steps:

Establishing a holistic API strategy and road map. A bank's plan for APIs has to be both wide and deep: crafted in close alignment with the bank's broader channel and product strategy, while also being comprehensive and granular about the specific APIs needed for customer, partner, and public offerings. The go-to-market plan should be differentiated by geography and segment, as well as responsive to customer needs, customer onboarding complexities, shifting regulatory norms, and competitive threats.

Bridging traditional organizational silos. To achieve success, business and IT leaders have to work together to define, develop, and roll out a product-centric road map for new API-enabled products and services. To make sure this integrated operating model functions smoothly, KPIs and incentives should be aligned across functions, and teams should have clearly defined end-to-end ownership of API-enabled products. This is especially important considering that 30 percent of respondents in our survey acknowledged that no one in their organization has this end-to-end decision-making authority and oversight of APIs.

Making APIs central to the customer proposition. Banks need a clear view of what makes their API-enabled products and services attractive to customers. When client strategies and new propositions are being formulated, product-development teams must consider the ways in which APIs can open up new features, services, or customer-experience enhancements. This includes a deliberate focus on customer onboarding and the overall usage experience.

Finding different kinds of talent. The types of people banks need to hire are changing. In addition to hiring from other banks or incumbent payment providers, leading players are seeking out IT and business talent from fintechs and ERP providers, particularly individuals with previous experience creating or working with API-enabled banking services. For IT, an open-source approach to development is a must, including the publishing, continuous monitoring, and improvement of SDKs on public portals. This is especially important considering that most of the growth in APIs at banks is expected to be for external connections.

Innovating and broadening API offerings. Thus far, most banks have used APIs primarily to connect their internal systems or serve existing corporate customers with basic features like payments. In our survey, over 80 percent of respondents said they already offer or plan to offer their clients the ability to access accounts, exchange currencies, and make domestic and cross-border payments from the client's own ERP or other systems. In other words, instead of having to access a bank's portal to do banking, a company can make payments to suppliers or vendors directly from internal systems. Such features are now table stakes.

For the next phase, banks will need to consider using APIs to embed more value-added services into their clients' systems, such as the management of market investments, liquidity management, and invoice financing (the ability to borrow money against amounts due from customers). Using APIs, banks can also let clients offer their own customers options such as supply-chain finance (the ability for suppliers and vendors to get paid more quickly than they otherwise would). In our survey, we found that leading players are actively pursuing these untapped areas and expect to triple growth in these more sophisticated services over the next three years. Currently, 6 to 13 percent of banks say they offer factoring, documentary finance, supply-chain finance, and invoice finance services; over the next three years, 32 to 46 percent say they plan to do so (Exhibit 4).

Exhibit 4

B2B APIs are here to stay. They are likely to become not only the most frequent form of bank-client interaction, but also primary facilitators of accelerated product innovation and the means by which banks and their clients integrate with fintechs and the platform economy.

Banks of all sizes and in all regions have already started on their B2B API journey, with the gap between leaders and laggards becoming evident. However, the marketplace remains in flux, and significant opportunities still exist for banks that successfully expand their API-enabled offerings, particularly in the trade and liquidity area. Over the next three years, organizations that actively pursue a comprehensive API approach (encompassing strategy, operations, technology, talent, and implementation) can drive growth and position themselves at the forefront of a transforming financial services industry.

Read this article:
From tech tool to business asset: How banks are using B2B APIs to fuel growth - McKinsey

Can Intel's XPU vision guide the industry into an era of heterogeneous computing? – VentureBeat

This article is part of the Technology Insight series, made possible with funding from Intel.

As data sprawls out from the network core to the intelligent edge, increasingly diverse compute resources follow, balancing power, performance, and response time. Historically, graphics processors (GPUs) were the offload target of choice for data processing. Today field programmable gate arrays (FPGAs), vision processing units (VPUs), and application specific integrated circuits (ASICs) also bring unique strengths to the table. Intel refers to those accelerators (and anything else to which a CPU can send processing tasks) as XPUs.

The challenge software developers face is determining which XPU is best for their workload; arriving at an answer often involves lots of trial and error. Faced with a growing list of architecture-specific programming tools to support, Intel spearheaded a standards-based programming model called oneAPI to unify code across XPU types. Simplifying software development for XPUs can't happen soon enough. After all, the move to heterogeneous computing (processing on the best XPU for a given application) seems inevitable, given evolving use cases and the many devices vying to address them.

Intel's strategy faces headwinds from NVIDIA's incumbent CUDA platform, which assumes you're using NVIDIA graphics processors exclusively. That walled garden may not be as impenetrable as it once was. Intel already has a design win with its upcoming Xe-HPC GPU, code-named Ponte Vecchio. The Argonne National Laboratory's Aurora supercomputer, for example, will feature more than 9,000 nodes, each with six Xe-HPCs, totaling more than 1 exaFLOP/s of sustained double-precision performance.

Time will tell if Intel can deliver on its promise to streamline heterogeneous programming with oneAPI, lowering the barrier to entry for hardware vendors and software developers alike. A compelling XPU roadmap certainly gives the industry a reason to look more closely.

The total volume of data spread between internal data centers, cloud repositories, third-party data centers, and remote locations is expected to increase by more than 42% from 2020 to 2022, according to The Seagate Rethink Data Survey. The value of that information depends on what you do with it, where, and when. Some data can be captured, classified, and stored to drive machine learning breakthroughs. Other applications require a real-time response.

The compute resources needed to satisfy those use cases look nothing alike. GPUs optimized for server platforms consume hundreds of watts each, while VPUs in the single-watt range might power smart cameras or computer vision-based AI appliances. In either case, a developer must decide on the best XPU for processing data as efficiently as possible. This isn't a new phenomenon. Rather, it's an evolution of a decades-long trend toward heterogeneity, where applications run control, data, and compute tasks on the hardware architecture best suited to each specific workload.

"Transitioning to heterogeneity is inevitable for the same reasons we went from single-core to multicore CPUs," says James Reinders, an engineer at Intel specializing in parallel computing. "It's making our computers more capable, and able to solve more problems and do things they couldn't do in the past, but within the constraints of hardware we can design and build."

As with the adoption of multicore processing, which forced developers to start thinking about their algorithms in terms of parallelism, the biggest obstacle to making computers more heterogeneous today is the complexity of programming them.

It used to be that developers programmed close to the hardware using low-level languages, providing very little abstraction. The code was often fast and efficient, but not portable. These days, higher-level languages extend compatibility across a broader swathe of hardware while hiding a lot of unnecessary details. Compilers, runtimes, and libraries underneath the code make the hardware do what you want. It makes sense that were seeing more specialized architectures enabling new functionality through abstracted languages.

Even now, new accelerators require their own software stacks, gobbling up hardware vendors' time and money. From there, developers make their own investment in learning new tools so they can determine the best architecture for their application.

Instead of spending time rewriting and recompiling code using different libraries and SDKs, imagine an open, cross-architecture model that can be used to migrate between architectures without leaving performance on the table. That's what Intel is proposing with its oneAPI initiative.

oneAPI supports a high-level language (Data Parallel C++, or DPC++), a set of APIs and libraries, and a hardware abstraction layer for low-level XPU access. On top of the open specification, Intel has its own suite of toolkits for various development tasks. The Base Toolkit, for example, includes the DPC++ compiler, a handful of libraries, a compatibility tool for migrating NVIDIA CUDA code to DPC++, the optimization-oriented VTune profiler, and the Advisor analysis tool, which helps identify the best kernels to offload. Other toolkits home in on more specific segments, such as HPC, AI and machine learning acceleration, IoT, rendering, and deep learning inference.

"When we talk about oneAPI at Intel, it's a pretty simple concept," says Intel's Reinders. "I want as much as possible to be the same. It's not that there's one API for everything. Rather, if I want to do fast Fourier transforms, I want to learn the interface for an FFT library, then I want to use that same interface for all my XPUs."

Intel isn't putting its clout behind oneAPI for purely selfless reasons. The company already has a rich portfolio of XPUs that stand to benefit from a unified programming model (in addition to the host processors tasked with commanding them). If each XPU were treated as an island, the industry would end up stuck where it was before oneAPI: with independent software ecosystems, marketing resources, and training for each architecture. By making as much as possible common, developers can spend more time innovating and less time reinventing the wheel.

An enormous number of FLOP/s, or floating-point operations per second, come from GPUs. NVIDIA's CUDA is the dominant platform for general-purpose GPU computing, and it assumes you're using NVIDIA hardware. Because CUDA is the incumbent technology, developers are reluctant to change software that already works, even if they'd prefer more hardware choice.

If Intel wants the community to look beyond proprietary lock-in, it needs to build a better mousetrap than its competition, and that starts with compelling GPU hardware. At its recent Architecture Day 2021, Intel disclosed that a pre-production implementation of its Xe-HPC architecture is already producing more than 45 TFLOPS of FP32 throughput, more than 5 TB/s of fabric bandwidth, and more than 2 TB/s of memory bandwidth. At least on paper, that's higher single-precision performance than NVIDIA's fastest data center processor.

The world of XPUs is more than just GPUs, though, which is exhilarating or terrifying, depending on who you ask. Supported by an open, standards-based programming model, a panoply of architectures might enable time-to-market advantages, dramatically lower power consumption, or workload-specific optimizations. But without oneAPI (or something like it), developers are stuck learning new tools for every accelerator, stymying innovation and overwhelming programmers.

Fortunately, we're seeing signs of life beyond NVIDIA's closed platform. As an example, the team responsible for RIKEN's Fugaku supercomputer recently used Intel's oneAPI Deep Neural Network Library (oneDNN) as a reference to develop its own deep learning library. Fugaku employs Fujitsu A64FX CPUs, based on Armv8-A with the Scalable Vector Extension (SVE) instruction set, which didn't yet have a deep learning library. Optimizing Intel's code for Armv8-A processors enabled up to a 400x speed-up compared with simply recompiling oneDNN without modification. Incorporating those changes into the library's main branch makes the team's gains available to other developers.

Intel's Reinders acknowledges the whole thing sounds a lot like open source. However, the XPU philosophy goes a step further, affecting the way code is written so that it's ready for different types of accelerators running underneath it. "I'm not worried that this is some type of fad," he says. "It's one of the next major steps in computing. It is not a question of whether an idea like oneAPI will happen, but rather when it will happen."

Continue reading here:
Can Intel's XPU vision guide the industry into an era of heterogeneous computing? - VentureBeat

Intel offers Loihi 2 to boffins: A 7nm chip with more than 1m programmable neurons – The Register

Robots with Intel Inside brains? That's what Chipzilla has in mind with its Loihi 2 neuromorphic chip, which tries to mimic the human brain.

This is Intel's stab at creating more intelligent computers that can efficiently discover patterns and associations in data and from that learn to make smarter and smarter decisions.

It's not a processor in the traditional sense, and it's aimed at experimentation rather than production use. As you can see from the technical brief [PDF], it consists of up to 128 cores that each have up to 8,192 components that act like natural spiking neurons that send messages to each other and form a neural network that tackles a particular problem.

The cores also implement the synapses that transmit information between the neurons, which can each send binary spikes (1s or 0s) or graded spikes (32-bit payload value).

An overview of the Loihi 2 chip architecture. Source: Intel

Each neuron can be assigned a program, written using a basic instruction set, to perform a task, and the whole thing is directed by six normal CPU cores that run software written in, say, C. There's also external IO to communicate with other hardware, and interfaces to link multiple Loihi 2 chips together into a mesh as required. There are other features, such as three-factor learning rules; see the technical brief for more details. The previous generation had neither graded spikes nor programmable neurons.
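For intuition about what such a spiking neuron does, here is a conceptual leaky integrate-and-fire model in plain Python. This is only an illustration of the behavior described above, not Loihi's actual instruction set or Intel's Lava API:

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.9, steps=100):
    """Simulate one leaky integrate-and-fire neuron.

    Each step, the membrane potential decays toward zero (the leak),
    integrates the incoming current, and emits a binary spike (1) when
    it crosses the threshold, after which the potential resets.
    """
    potential = 0.0
    spikes = []
    for t in range(steps):
        potential = potential * leak + input_current[t]
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0   # reset after spiking
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(seed=42)
current = rng.uniform(0.0, 0.3, size=100)   # random input current
print(sum(lif_neuron(current)), "spikes in 100 steps")
```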

The 'highlights' of the Loihi 2 per-neuron instruction set. Source: Intel

There's a race to replicate the brain electronically to run powerful AI applications quickly and without making, say, a massive dent in the electric bill. Samsung just said it wants to put human-like brain structures on a chip. IBM is also developing hardware designed around the brain.

Intel's latest Loihi is claimed to be ten times faster than the previous-generation component announced four years ago this month.

"Loihi 2 is currently a research chip only," said Garrick Orchard, a research scientist at Intel Labs via email with The Register. "Its core-based architecture is scalable and could enable future flavors of the chip when the technology matures that could have a range of commercial applications spanning data center to edge devices."

Each Loihi 2 chip has potentially more than a million digital neurons, up from 128,000 in its predecessor. To put that in context, there are roughly 90 billion interconnected neurons in the human brain, which should give you an idea of the level of intelligence possible with this hardware right now.

The digital neurons compute asynchronously in parallel and can be customized by their programming. Loihi 2 supports a maximum of 120 million synapses, compared with over a trillion synapses in the human brain, and packs 2.3 billion transistors into a 31 mm² die. According to Intel, its digital circuits run "up to 5000x faster than biological neurons."

The chip is an early sample of the Intel 4 manufacturing node, the semiconductor giant's brand name for its much-delayed 7nm process, which uses extreme ultraviolet (EUV) lithography to etch the chips. Loihi 1 was made on a 14nm process.

"With Loihi 2 being fabricated with a pre-production version of the Intel 4 process, this underscores the health and progress of Intel 4," an Intel spokeswoman told us.

Intel, by the way, showed off a wafer of its Intel 4 CPU family, code-named Meteor Lake and aimed at desktops and mobile PCs, at a press event in July, the first time it had done so. Chips using that microarchitecture are expected to ship in 2023, so Loihi 2 is a glimpse of what's to come manufacturing-wise.

Intel is working with the research community to come up with applications for Loihi 2. Its predecessor was used to create systems to identify smells, manage robotic arms, and optimize railway scheduling.

There are no projects underway with Loihi 2 yet, though partners that worked with the original Loihi "have communicated their excitement for new capabilities within Loihi 2," Orchard said.

One such partner is America's Los Alamos National Laboratory, which is using the first-gen Loihi chip as an artificial brain to understand the benefits of sleep.

An open-source programming framework called Lava was introduced alongside Loihi 2, with which developers can write AI applications that can be implemented in the chip's neural network. The underlying tools will also support the Robot Operating System (ROS), TensorFlow, PyTorch, and other frameworks.

The Lava framework is available for download on GitHub.

This neuromorphic hardware will be available to researchers via Intel's Neuromorphic Research Cloud. The available components include the Oheo Gulch board, which pairs a single-socket Loihi 2 with an FPGA. A system code-named Kapoho Point with eight Loihi 2 chips will be available soon.

Our friends over at The Next Platform have more analysis and info on Loihi 2 right here.

Excerpt from:
Intel offers Loihi 2 to boffins: A 7nm chip with more than 1m programmable neurons - The Register

FIWARE and The European Data Spaces Alliance – ARC Viewpoints

ARC was recently briefed by Ulrich Ahle, CEO; Juan José Hierro, CTO; and Cristina Brandstetter, CMO, of the FIWARE organization. This blog emphasizes the technical aspects and their implications, mixing introductory content with recent developments communicated during the briefing. The blog concludes with highlights of FIWARE's three-year strategy.

FIWARE started with a framework of open-source platform components that can be assembled with third-party, platform-compliant components to accelerate the development of smart, IoT-enabled solutions. These solutions need to gather, process, and manage context information, and to inform external actors or parties so they can access and update the context and keep it current. Context information consists of entities characterized by attributes that hold values: for example, an entity 'car' with an attribute 'speed' whose value is 100, and an attribute 'location' whose values represent geospatial coordinates.

The FIWARE Orion Context Broker is the core component of any "Powered by FIWARE" solution. It includes an information model that the user can configure without programming, or borrow from the FIWARE-led Smart Data Models initiative. The user interacts with the context broker via a REST API, according to the NGSIv2 or NGSI-LD standards issued by ETSI.
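As a concrete illustration of that API, the sketch below creates and updates the 'car' entity from the example above via NGSIv2 REST calls, assuming an Orion broker listening on localhost:1026; the entity ID and coordinate values are invented for the demonstration.

```python
import requests

ORION = "http://localhost:1026"  # assumed local Orion Context Broker endpoint

# Create the 'car' entity from the example: speed 100 plus a geospatial location.
car = {
    "id": "Car1",
    "type": "Car",
    "speed": {"value": 100, "type": "Number"},
    "location": {
        "value": {"type": "Point", "coordinates": [-3.6919, 40.4189]},
        "type": "geo:json",
    },
}
requests.post(f"{ORION}/v2/entities", json=car).raise_for_status()

# Update the speed attribute as the context changes.
requests.patch(
    f"{ORION}/v2/entities/Car1/attrs",
    json={"speed": {"value": 80, "type": "Number"}},
).raise_for_status()

# Read the current context back.
print(requests.get(f"{ORION}/v2/entities/Car1").json())
```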

This flexible context management can be used to build digital twins of the type that describe the information associated with an asset. Because the data structures can hold estimated future values of attributes, these can be linked to simulations that provide those estimations, giving a structured approach to linking simulations with asset information. Nothing prevents the user from documenting the location and version of the simulator to obtain a complete and consistent record of static, predicted, and possibly historical asset information. We would not suggest using context information to store historical process data; however, just as with future data, links to that data can be documented with the asset information. FIWARE can be used in any vertical and is most often used in smart cities and mobility, smart industry (which includes smart manufacturing and Industry 4.0), smart energy, smart agri-food, and smart water.

FIWARE connectors include connectivity to sensors (for instance, IoT agents connecting IoT sensors), field instruments (agents using OPC-UA connectors), robots, and classical on-premises applications such as CMMS, SCM, or MES/MOM. The connector stack further includes the context broker, optionally stream-processing engines, and connectivity to cloud platforms and smart applications in the cloud. Connectors thereby provide great flexibility in connectivity without requiring programming.

FIWARE is applicable to all verticals and is the leader for context management in smart cities worldwide. Significantly, after a few years of experimentation with smart city platforms, the Indian smart city program (IUDX) decided on a countrywide unified platform based on FIWARE for future smart city implementations to gain efficiencies and synergies.

In smart industry solutions, the FIWARE context broker can be used to synchronize information between edge and cloud; and decouple cloud applications or services that use subsets of the same information pool. FIWARE supports smart industry users by providing information on performance under high loads (high frequency, high volume industrial data), and implementation guidelines to optimize for these high loads. Other FIWARE enablers can provide additional open-source applications, such as WIRECLOUD for dashboarding.

The context broker can be used across company or organizational boundaries while guaranteeing owners control over their data, via identity and API management. FIWARE has demonstrated the capability to implement IDS-compliant data space connectors in the past. Building on these components and this experience, FIWARE has recently published an architecture for FIWARE-enabled data spaces. However, different implementations of the IDS reference architecture may not always be interoperable. To stimulate the use of data spaces for the exchange of information among companies, the four major organizations promoting data spaces in Europe, IDSA, FIWARE, GAIA-X, and the Big Data Value Association (BDVA), created the Data Spaces Business Alliance this week. The alliance provides a common reference model and harmonizes technologies; supports users with tools, resources, and expertise; identifies existing and future data spaces; and promotes best practices, such as the recently published Design Principles for Data Spaces.

FIWARE has the vision to become the global enabler for the Data Economy. The strategy to reach that vision has the following pillars:

- Growing the FIWARE ecosystem, in terms of users, members, and developers, and leveraging large corporate accounts.
- Growing the market readiness of the technology by increasing the functionality, performance, and quality of the components in the open-source portfolio.
- Focused support of vertical industry domains, in order of priority: smart cities and mobility, smart industry, smart energy, smart agri-food, and smart water.
- Globalization, through partnerships with existing global members, promoting the NGSI standard with NIST, and leveraging the FIWARE iHubs.

ARC observes that the FIWARE open-source platform has increased in maturity, both in its technology readiness for smart industry applications and as a globalizing organization. The market vision and technology concepts seem very sound and promising to us. We encourage users to ask companies and applied research organizations about their experience with FIWARE and to determine how the platform can add value. Because FIWARE is open source, the cost of using the technology is limited to building knowledge and implementing applications, a considerable advantage.

See more here:
FIWARE and The European Data Spaces Alliance - ARC Viewpoints

Top 10 Recent Chatbots to Make Note of in 2021 – Analytics Insight

Chatbots are used by 1.4 billion people today. Organizations are deploying their best AI chatbots to carry on 1:1 conversations with customers and employees. AI-powered chatbots are also capable of automating various tasks, including sales and marketing, customer service, and operations.

As demand for chatbot software has soared, the marketplace of companies providing chatbot technology has become harder to navigate, with many vendors promising to do exactly the same thing. However, not all AI chatbots are created equal.

To help organizations of all sizes and sectors find the best of the best, we have gathered the top 10 recent chatbots for specific business use cases across various sectors:

Netomi's AI platform helps companies automatically resolve customer service tickets across email, chat, messaging, and voice. It has the highest accuracy of any customer service chatbot thanks to its advanced natural language understanding (NLU) engine. It can automatically resolve more than 70% of customer queries without human intervention, and it focuses broadly on the AI customer experience. Netomi is remarkably easy to adopt and has out-of-the-box integrations with all of the leading agent desk platforms. The company works with businesses providing a variety of products and services across industries, including WestJet, Brex, Zinus, Singtel, Circles Life, WB Games, and HP.

atSpoke makes it easy for employees to get the knowledge they need. It is an internal ticketing system with built-in AI. It allows internal teams (IT help desk, HR, and other business operations teams) to enjoy 5x faster resolutions by immediately answering 40% of requests automatically. The AI responds to a range of employee questions by surfacing knowledge base content. Employees can get updates directly within the channels they already use every day, including Slack, Google Drive, Confluence, and Microsoft Teams.

WP-Chatbot is the most popular chatbot in the WordPress ecosystem, giving a huge number of websites live chat and web chat capabilities. WP-Chatbot integrates with a Facebook Business page and powers live and automated interactions on a WordPress site through a native Messenger chat widget. There is a simple one-click installation process, making it one of the fastest ways to add live chat to a WordPress site. Users have a single inbox for all messages, whether they arrive via Messenger or webchat, which provides an efficient way to manage cross-platform customer interactions.

The Microsoft Bot Framework is a comprehensive framework for building conversational AI experiences. The Bot Framework Composer is an open-source, visual authoring canvas that lets developers and multidisciplinary teams design and build conversational experiences with language understanding, QnA Maker, and bot responses. The Microsoft Bot Framework gives users a comprehensive open-source SDK and tools to easily connect a bot to popular channels and devices.

Do you want to interact with the 83.1 million people who own a smart speaker? Amazon, which has captured 70% of this market, has the best AI chatbot software for voice assistants. With Alexa for Business, IT teams can create custom skills that can answer customer questions. The creation of custom skills is a trend that has exploded: Amazon grew from 130 skills to over 100,000 skills as of September 2019, in just over three years. Creating custom skills on Alexa allows your customers to ask questions, order or re-order products or services, or engage with other content spontaneously by simply speaking out loud. With Alexa for Business, teams can integrate with Salesforce, ServiceNow, or any other custom apps and services.
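For a sense of what building such a custom skill involves, here is a minimal handler sketch using the ASK SDK for Python. The intent name "OrderStatusIntent" and the reply text are hypothetical stand-ins for whatever a real deployment would define.

```python
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.utils import is_request_type, is_intent_name

sb = SkillBuilder()

@sb.request_handler(can_handle_func=is_request_type("LaunchRequest"))
def launch_handler(handler_input):
    # Greets the user when the skill is opened and keeps the session alive.
    speech = "Welcome. You can ask about the status of your order."
    return handler_input.response_builder.speak(speech).ask(speech).response

@sb.request_handler(can_handle_func=is_intent_name("OrderStatusIntent"))
def order_status_handler(handler_input):
    # A real skill would look the order up in a backend such as Salesforce
    # or ServiceNow; the reply here is hard-coded for illustration.
    return handler_input.response_builder.speak(
        "Your most recent order shipped yesterday."
    ).response

# Entry point when the skill is hosted as an AWS Lambda function.
handler = sb.lambda_handler()
```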

Zendesk's Answer Bot works alongside your support team within Zendesk to answer incoming customer questions right away. The Answer Bot pulls relevant articles from your Zendesk knowledge base to provide customers with the information they need immediately. You can deploy additional technology on top of your Zendesk chatbot, or you can let the Zendesk Answer Bot fly solo on your website chat, within mobile apps, or for internal teams on Slack.

CSML is the first open-source programming language and chatbot engine dedicated to developing powerful and interoperable chatbots. CSML helps developers build and deploy chatbots easily with its expressive syntax and its ability to connect to any third-party API. Used by thousands of chatbot developers, CSML Studio is the simplest way to get started with CSML, with everything included to start building chatbots directly in your browser. A free playground is also available to let developers experiment with the language without signing up.

Dasha is a conversational AI-as-a-service platform. It provides developers with tools to create human-like, deeply conversational AI applications. The applications can be used for call-center agent replacement, text chat, or to add conversational voice interfaces to mobile apps or IoT devices. Dasha was named a Gartner Cool Vendor in Conversational AI in 2020.

No knowledge of AI or ML is needed to work with Dasha; any developer with basic JavaScript knowledge will feel completely at home.

SurveySparrow is a software platform for conversational surveys and forms. The platform bundles customer experience surveys (e.g., Net Promoter Score (NPS), Customer Satisfaction Score (CSAT), and Customer Effort Score (CES)) and employee experience surveys (e.g., recruitment and pre-hire, employee 360 assessment, employee check-in, and employee exit interviews). The conversational UI presents surveys in a chat-like experience, an approach that increases survey completion rates by 40%. SurveySparrow comes with a range of out-of-the-box question types and templates. Surveys can be embedded on websites or in other software tools through integrations with Zapier, Slack, Intercom, and Mailchimp.

Next year, 2.4 billion people will be using Facebook Messenger. ManyChat is a great option if you're looking for a quick way to launch a simple chatbot to sell products, book appointments, send order updates, or share coupons on Facebook Messenger. It has industry-specific templates, or you can build your own with a drag-and-drop interface, which lets you launch a bot within minutes, without coding. You can easily connect to eCommerce tools, including Shopify, PayPal, Stripe, ActiveCampaign, Google Sheets, and 1,500+ additional apps through Zapier and Integromat.

Link:
Top 10 Recent Chatbots to Make Note of in 2021 - Analytics Insight

How to Lead Through Burnout and Emerge More Resilient – WITN

eMindful Unveils New Programming and Resources to Address Burnout Head On

Published: Oct. 5, 2021 at 12:52 PM EDT

ORLANDO, Fla., Oct. 5, 2021 /PRNewswire/ -- Employees are leaving the workforce en masse, and burnout is to blame. The devastation of the pandemic has taken a toll on employees, with 77% reporting that they have experienced workplace burnout and more than 42% reporting symptoms of anxiety and depression, up from 11% the previous year.

eMindful, the leading provider of evidence-based mindfulness programs for everyday moments and chronic conditions, regularly takes the pulse of its participants and provides programming and resources to address their needs in real time. More than one-third of participants surveyed recently indicated that they are experiencing some form of burnout, including difficulty balancing time spent working versus not working, or a workload that exceeds their capacity.

eMindful is addressing the crisis head on with the introduction of new programming and resources. This includes a Mindfulness-Based Cognitive Training program, which uses a cognitive-behavioral therapy approach with mindfulness to address burnout and prevent depression and relapse.

The program includes 16 expert-led, live, virtual mindfulness sessions and a four-hour group workshop and retreat to build community and support. Using an evidence-based approach, the teacher helps participants build self-compassion, foster positive feelings, thoughts, and behaviors, and manage feelings of overwhelm. The program also includes a click-to-call feature for participants who need immediate access to a mental health professional.

eMindful also is introducing a Leading Through Burnout collection with a live webinar and an on-demand series for leaders to recognize signs of burnout in themselves and their employees, learn strategies to relate to difficult emotions in new and positive ways, and create a pathway for an open dialogue with their staff around workload and mental health.

"Our burned-out workforce is the latest mental health casualty of the pandemic and leaders in particular are suffering," said Mary Pigatti, President, eMindful. "These resources will allow managers to build skills and learn strategies to lead through burnout and emerge more resilient."

The next MBCT program begins on Monday, Oct. 18. Organizations interested in bringing this program to their population or their clients' populations can contact sales@emindful.com.

Media Contact: Zev Suissa, eMindful, 772-569-4540, zev@emindful.com

About eMindful: eMindful, a Wondr Health company, provides evidence-based mindfulness programs for everyday life and chronic conditions by helping individuals make every moment matter.


SOURCE eMindful

The above press release was provided courtesy of PRNewswire. The views, opinions and statements in the press release are not endorsed by Gray Media Group nor do they necessarily state or reflect those of Gray Media Group, Inc.

Here is the original post:
How to Lead Through Burnout and Emerge More Resilient - WITN