FIWARE and The European Data Spaces Alliance – ARC Viewpoints

ARC was recently briefed by Ulrich Ahle, CEO, Juan José Hierro, CTO, and Cristina Brandstetter, CMO of the FIWARE organization. This blog emphasizes the technical aspects and their implications, mixing introductory content with recent developments communicated during the briefing. The blog concludes with highlights of FIWARE's three-year strategy.

FIWARE started with a framework of open-source platform components that can be assembled with third-party platform-compliant components to accelerate the development of smart, IoT-enabled solutions. These solutions need to gather, process, and manage context information, and to inform external actors or parties, enabling them to access and update the context and keep it current. Context information consists of entities characterized by attributes, which in turn have values: for example, an entity car with an attribute speed whose value is 100 and an attribute location whose values represent geospatial coordinates.

The FIWARE Orion Context Broker component is the core component of any Powered by FIWARE solution. This component includes an information model that the user can configure without programming, or borrow from the FIWARE-led Smart Data Models initiative. The user can interact with the context broker via a REST API, according to the NGSIv2 specification or the NGSI-LD standard issued by ETSI.
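
As a hedged illustration of how a client might create and read the car entity described above, here is a minimal Python sketch against the NGSIv2 REST API; it assumes an Orion Context Broker running locally at http://localhost:1026, and the entity ID is our own choice:

```python
import requests

ORION = "http://localhost:1026/v2"  # assumed local Orion Context Broker

# Create a "car" entity with a speed attribute and a geospatial location
entity = {
    "id": "urn:ngsi-ld:Car:001",
    "type": "Car",
    "speed": {"value": 100, "type": "Number"},
    "location": {
        "value": {"type": "Point", "coordinates": [-3.7, 40.4]},
        "type": "geo:json",
    },
}
requests.post(f"{ORION}/entities", json=entity).raise_for_status()

# Read the current context back (keyValues strips attribute metadata)
car = requests.get(
    f"{ORION}/entities/urn:ngsi-ld:Car:001", params={"options": "keyValues"}
).json()
print(car["speed"], car["location"]["coordinates"])
```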

This flexible context information management can be used to build digital twins of the type that describe the information associated with an asset. As the data structures can handle estimates of future attribute values, these can be linked to simulations that provide those estimates. This is a structured approach to linking simulations with asset information. Nothing prevents the user from documenting the location and versions of the simulator to obtain a complete and consistent record of static, predicted, and possibly historical asset information. We would not suggest using context information to store historical process data; however, just as with future data, links to those can be documented with the asset information. FIWARE can be used in any vertical and is most often used in smart cities and mobility, smart industry (which includes smart manufacturing and Industry 4.0), smart energy, smart agri-food, and smart water.

FIWARE connectors include connectivity to sensors (for instance, IoT agents connecting IoT sensors), field instruments (with agents using OPC UA connectors), robots, and classical on-premises applications such as CMMS, SCM, or MES/MOM. The connector stack further includes the context broker, optional stream processing engines, and connectivity to cloud platforms and smart applications in the cloud. Connectors thereby provide great flexibility in connectivity without requiring programming.

FIWARE is applicable to all verticals and is the leader for context management in smart cities worldwide. Significantly, after a few years of experimentation with smart city platforms, the Indian smart city program (IUDX) decided on a countrywide unified platform based on FIWARE for future smart city implementations to gain efficiencies and synergies.

In smart industry solutions, the FIWARE context broker can be used to synchronize information between edge and cloud, and to decouple cloud applications or services that use subsets of the same information pool. FIWARE supports smart industry users by providing information on performance under high loads (high-frequency, high-volume industrial data) and implementation guidelines to optimize for these loads. Other FIWARE enablers can provide additional open-source applications, such as WireCloud for dashboarding.

The context broker can be used across company or organizational boundaries while guaranteeing owners control over their data, via identity and API management. FIWARE has demonstrated the capability to implement IDS-compliant data space connectors in the past. Building on these components and this experience, FIWARE recently published an architecture for FIWARE-enabled data spaces. However, different implementations of the IDS reference architecture may not always be interoperable. To stimulate the usage of data spaces for the exchange of information among companies, the four major organizations in Europe promoting data spaces (IDSA, FIWARE, GAIA-X, and the Big Data Value Association, BDVA) have created the Data Spaces Business Alliance this week, providing a common reference model and harmonizing technologies; supporting users with tools, resources, and expertise; identifying existing and future data spaces; and promoting best practices, such as the recently published Design Principles for Data Spaces.

FIWARE has the vision to become the global enabler for the Data Economy. The strategy to reach that vision has the following pillars:

Growing the FIWARE ecosystem, in terms of users, members, and developers, and leveraging large corporate accounts. Growing the market readiness of the technology by increasing the functionality, performance, and quality of the components in the open-source portfolio. Focused support of vertical industry domains, in order of priority: smart cities and mobility, smart industry, smart energy, smart agri-food, and smart water. Globalization, through partnerships with existing global members, promoting the NGSI standard with NIST, and leveraging the FIWARE iHubs.

ARC observes that the FIWARE open-source platform has increased in maturity, both in terms of technology readiness for smart industry applications and as a globalizing organization. The market vision and technology concepts seem very sound and promising to us. We encourage users to ask companies and applied research organizations about their experience with FIWARE and to determine how the platform can add value. Because FIWARE is an open-source platform, the cost of using the technology is limited to building knowledge and implementing applications, a considerable advantage.

See more here:
FIWARE and The European Data Spaces Alliance - ARC Viewpoints

Intel offers Loihi 2 to boffins: A 7nm chip with more than 1m programmable neurons – The Register

Robots with Intel Inside brains? That's what Chipzilla has in mind with its Loihi 2 neuromorphic chip, which tries to mimic the human brain.

This is Intel's stab at creating more intelligent computers that can efficiently discover patterns and associations in data and from that learn to make smarter and smarter decisions.

It's not a processor in the traditional sense, and it's aimed at experimentation rather than production use. As you can see from the technical brief [PDF], it consists of up to 128 cores that each have up to 8,192 components that act like natural spiking neurons that send messages to each other and form a neural network that tackles a particular problem.

The cores also implement the synapses that transmit information between the neurons, which can each send binary spikes (1s or 0s) or graded spikes (32-bit payload value).

An overview of the Loihi 2 chip architecture (Source: Intel)

Each neuron can be assigned a program written using a basic instruction set to perform a task, and the whole thing is directed by six normal CPU cores that run software written in, say, C. There's also external IO to communicate with other hardware, and interfaces to link multiple Loihi 2 chips together into a mesh as required. There are other features, such as three-factor learning rules; see the technical brief for more details. The previous generation had neither graded spikes nor programmable neurons.
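
To make the spiking-neuron idea concrete, here is a generic leaky integrate-and-fire sketch, the textbook model behind chips like Loihi. This is not Intel's actual neuron program or Lava code; the parameter names and values are purely illustrative. A graded spike, as in Loihi 2, would carry a payload value instead of a bare 1:

```python
import numpy as np

def lif_run(input_current, threshold=1.0, leak=0.9, steps=100):
    """Simulate one leaky integrate-and-fire neuron.

    Each step the membrane potential leaks, integrates its input,
    and emits a binary spike (1) when it crosses the threshold.
    """
    v = 0.0
    spikes = []
    for t in range(steps):
        v = leak * v + input_current[t]  # leak, then integrate
        if v >= threshold:
            spikes.append(1)  # binary spike, as in Loihi 1
            v = 0.0           # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
train = lif_run(rng.uniform(0, 0.3, size=100))
print(train.sum(), "spikes in 100 steps")
```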

The 'highlights' of the Loihi 2 per-neuron instruction set (Source: Intel)

There's a race to replicate the brain electronically to run powerful AI applications quickly and without making, say, a massive dent in the electric bill. Samsung just said it wants to put human-like brain structures on a chip. IBM is also developing hardware designed around the brain.

Intel's latest Loihi is ten times faster than the previous-generation component announced four years ago this month, it is claimed.

"Loihi 2 is currently a research chip only," said Garrick Orchard, a research scientist at Intel Labs via email with The Register. "Its core-based architecture is scalable and could enable future flavors of the chip when the technology matures that could have a range of commercial applications spanning data center to edge devices."

Each Loihi 2 chip has potentially more than a million digital neurons, up from 128,000 in its predecessor. To put that in context, there are roughly 90 billion interconnected neurons in the human brain, which should give you an idea of the level of intelligence possible with this hardware right now.

The digital neurons compute asynchronously in parallel, and can be customized by their programming. Loihi 2 supports a maximum of 120 million synapses, compared to over a trillion synapses in the human brain, and has 2.3 billion transistors in a 31 mm² die area. According to Intel, its digital circuits run "up to 5000x faster than biological neurons."

The chip is an early sample of the Intel 4 manufacturing node, which is the semiconductor giant's brand name for its much-delayed 7nm process node, and uses extreme ultraviolet (EUV) lithography to etch the chips. Loihi 1 was made using a 14nm process.

"With Loihi 2 being fabricated with a pre-production version of the Intel 4 process, this underscores the health and progress of Intel 4," an Intel spokeswoman told us.

Intel, by the way, showed off a wafer of the Intel 4 CPU family, code-named Meteor Lake and aimed at desktops and mobile PCs, at a press event in July, the first time it had done so. Chips using that microarchitecture are expected to ship in 2023, hence Loihi 2 is a glimpse of what's to come manufacturing-wise.

Intel is working with the research community to come up with applications for Loihi 2. Its predecessor was used to create systems to identify smells, manage robotic arms, and optimize railway scheduling.

There are no projects underway with Loihi 2 yet, though partners that worked with the original Loihi "have communicated their excitement for new capabilities within Loihi 2," Orchard said.

One such partner is America's Los Alamos National Laboratory, which is using the first-gen Loihi chip as an artificial brain to understand the benefits of sleep.

An open-source programming framework called Lava was introduced alongside Loihi 2, with which developers can write AI applications that can be implemented in the chip's neural network. The underlying tools will also support Robot Operating System (ROS), TensorFlow, PyTorch, and other frameworks.

The Lava framework is available for download on GitHub.

This neuromorphic hardware will be available to researchers via Intel's Neuromorphic Research Cloud. The available components include the Oheo Gulch board, which carries a single-socket Loihi 2 linked to an FPGA. A system code-named Kapoho Point with eight Loihi 2 chips will be available soon.

Our friends over at The Next Platform have more analysis and info on Loihi 2 right here.

Excerpt from:
Intel offers Loihi 2 to boffins: A 7nm chip with more than 1m programmable neurons - The Register

Can Intel's XPU vision guide the industry into an era of heterogeneous computing? – VentureBeat

This article is part of the Technology Insight series, made possible with funding from Intel.

As data sprawls out from the network core to the intelligent edge, increasingly diverse compute resources follow, balancing power, performance, and response time. Historically, graphics processors (GPUs) were the offload target of choice for data processing. Today field programmable gate arrays (FPGAs), vision processing units (VPUs), and application specific integrated circuits (ASICs) also bring unique strengths to the table. Intel refers to those accelerators (and anything else to which a CPU can send processing tasks) as XPUs.

The challenge software developers face is determining which XPU is best for their workload; arriving at an answer often involves lots of trial and error. Faced with a growing list of architecture-specific programming tools to support, Intel spearheaded a standards-based programming model called oneAPI to unify code across XPU types. Simplifying software development for XPUs can't happen soon enough. After all, the move to heterogeneous computing (processing on the best XPU for a given application) seems inevitable, given evolving use cases and the many devices vying to address them.

KEY POINTS

Intel's strategy faces headwinds from NVIDIA's incumbent CUDA platform, which assumes you're using NVIDIA graphics processors exclusively. That walled garden may not be as impenetrable as it once was. Intel already has a design win with its upcoming Xe-HPC GPU, code-named Ponte Vecchio. The Argonne National Laboratory's Aurora supercomputer, for example, will feature more than 9,000 nodes, each with six Xe-HPCs, totaling more than 1 exaFLOP/s of sustained double-precision performance.

Time will tell if Intel can deliver on its promise to streamline heterogeneous programming with oneAPI, lowering the barrier to entry for hardware vendors and software developers alike. A compelling XPU roadmap certainly gives the industry a reason to look more closely.

The total volume of data spread between internal data centers, cloud repositories, third-party data centers, and remote locations is expected to increase by more than 42% from 2020 to 2022, according to The Seagate Rethink Data Survey. The value of that information depends on what you do with it, where, and when. Some data can be captured, classified, and stored to drive machine learning breakthroughs. Other applications require a real-time response.

The compute resources needed to satisfy those use cases look nothing alike. GPUs optimized for server platforms consume hundreds of watts each, while VPUs in the single-watt range might power smart cameras or computer vision-based AI appliances. In either example, a developer must decide on the best XPU for processing data as efficiently as possible. This isn't a new phenomenon. Rather, it's an evolution of a decades-long trend toward heterogeneity, where applications can run control, data, and compute tasks on the hardware architecture best suited to each specific workload.

"Transitioning to heterogeneity is inevitable for the same reasons we went from single core to multicore CPUs," says James Reinders, an engineer at Intel specializing in parallel computing. "It's making our computers more capable, and able to solve more problems and do things they couldn't do in the past but within the constraints of hardware we can design and build."

As with the adoption of multicore processing, which forced developers to start thinking about their algorithms in terms of parallelism, the biggest obstacle to making computers more heterogenous today is the complexity of programming them.

It used to be that developers programmed close to the hardware using low-level languages, providing very little abstraction. The code was often fast and efficient, but not portable. These days, higher-level languages extend compatibility across a broader swathe of hardware while hiding a lot of unnecessary details. Compilers, runtimes, and libraries underneath the code make the hardware do what you want. It makes sense that were seeing more specialized architectures enabling new functionality through abstracted languages.

Even now, new accelerators require their own software stacks, gobbling up the hardware vendors' time and money. From there, developers make their own investment into learning new tools so they can determine the best architecture for their application.

Instead of spending time rewriting and recompiling code using different libraries and SDKs, imagine an open, cross-architecture model that can be used to migrate between architectures without leaving performance on the table. That's what Intel is proposing with its oneAPI initiative.

oneAPI supports high-level languages (Data Parallel C++, or DPC++), a set of APIs and libraries, and a hardware abstraction layer for low-level XPU access. On top of the open specification, Intel has its own suite of toolkits for various development tasks. The Base Toolkit, for example, includes the DPC++ compiler, a handful of libraries, a compatibility tool for migrating NVIDIA CUDA code to DPC++, the optimization-oriented VTune profiler, and the Advisor analysis tool, which helps identify the best kernels to offload. Other toolkits home in on more specific segments, such as HPC, AI and machine learning acceleration, IoT, rendering, and deep learning inference.

"When we talk about oneAPI at Intel, it's a pretty simple concept," says Intel's Reinders. "I want as much as possible to be the same. It's not that there's one API for everything. Rather, if I want to do fast Fourier transforms, I want to learn the interface for an FFT library, then I want to use that same interface for all my XPUs."

Intel isn't putting its clout behind oneAPI for purely selfless reasons. The company already has a rich portfolio of XPUs that stand to benefit from a unified programming model (in addition to the host processors tasked with commanding them). If each XPU was treated as an island, the industry would end up stuck where it was before oneAPI: with independent software ecosystems, marketing resources, and training for each architecture. By making as much common as possible, developers can spend more time innovating and less time reinventing the wheel.

An enormous number of FLOP/s, or floating-point operations per second, come from GPUs. NVIDIA's CUDA is the dominant platform for general purpose GPU computing, and it assumes you're using NVIDIA hardware. Because CUDA is the incumbent technology, developers are reluctant to change software that already works, even if they'd prefer more hardware choice.

If Intel wants the community to look beyond proprietary lock-in, it needs to build a better mousetrap than its competition, and that starts with compelling GPU hardware. At its recent Architecture Day 2021, Intel disclosed that a pre-production implementation of its Xe-HPC architecture is already producing more than 45 TFLOPS of FP32 throughput, more than 5 TB/s of fabric bandwidth, and more than 2 TB/s of memory bandwidth. At least on paper, that's higher single-precision performance than NVIDIA's fastest data center processor.

The world of XPUs is more than just GPUs though, which is exhilarating and terrifying, depending on who you ask. Supported by an open, standards-based programming model, a panoply of architectures might enable time-to-market advantages, dramatically lower power consumption, or workload-specific optimizations. But without oneAPI (or something like it), developers are stuck learning new tools for every accelerator, stymying innovation and overwhelming programmers.

Fortunately, we're seeing signs of life beyond NVIDIA's closed platform. As an example, the team responsible for RIKEN's Fugaku supercomputer recently used Intel's oneAPI Deep Neural Network Library (oneDNN) as a reference to develop its own deep learning process library. Fugaku employs Fujitsu A64FX CPUs, based on Armv8-A with the Scalable Vector Extension (SVE) instruction set, which didn't have a DL library yet. Optimizing Intel's code for Armv8-A processors enabled an up to 400x speed-up compared to simply recompiling oneDNN without modification. Incorporating those changes into the library's main branch makes the team's gains available to other developers.

Intel's Reinders acknowledges the whole thing sounds a lot like open source. However, the XPU philosophy goes a step further, affecting the way code is written so that it's ready for different types of accelerators running underneath it. "I'm not worried that this is some type of fad," he says. "It's one of the next major steps in computing. It is not a question of whether an idea like oneAPI will happen, but rather when it will happen."

Continue reading here:
Can Intel's XPU vision guide the industry into an era of heterogeneous computing? - VentureBeat

Top 10 Recent Chatbots to Make Note of in 2021 – Analytics Insight

Chatbots are used by 1.4 billion people today. Organizations are deploying their best AI chatbots to carry on 1:1 conversations with customers and employees. AI-powered chatbots are also capable of automating various tasks, including sales and marketing, customer service, and operational work.

As demand for chatbot software has soared, the marketplace of companies providing chatbot technology has become harder to navigate, with many vendors promising to do exactly the same thing. Nonetheless, not all AI chatbots are alike.

To help organizations of all sizes and sectors find the best of the best, we have gathered the top 10 recent chatbots for specific business use cases across various sectors:

Netomi's AI platform helps companies automatically resolve customer service tickets over email, chat, messaging, and voice. It has the highest accuracy of any customer service chatbot thanks to its advanced natural language understanding (NLU) engine. It can automatically resolve more than 70% of customer queries without human intervention and focuses broadly on the AI customer experience. Netomi is remarkably easy to adopt and has out-of-the-box integrations with all of the leading agent desk platforms. The company works with businesses providing a variety of products and services across many industries, including WestJet, Brex, Zinus, Singtel, Circles.Life, WB Games, and HP.

atSpoke makes it easy for employees to get the knowledge they need. It is an internal ticketing system with built-in AI, allowing internal teams (the IT help desk, HR, and other business operations teams) to enjoy 5x faster resolutions by immediately answering 40% of requests automatically. The AI responds to a range of employee questions by surfacing knowledge-base content. Employees can get updates directly within the channels they use every day, including Slack, Google Drive, Confluence, and Microsoft Teams.

WP-Chatbot is the most popular chatbot in the WordPress ecosystem, giving tens of thousands of websites live chat and web chat capabilities. WP-Chatbot integrates with a Facebook Business page and powers live and automated interactions on a WordPress site through a native Messenger chat widget. There is a simple one-click installation process. It is probably the fastest way to add live chat to a WordPress site. Users have a single inbox for all messages, whether they occur on Messenger or in webchat, which provides a truly efficient way to manage cross-platform customer interactions.

The Microsoft Bot Framework is a comprehensive framework for building conversational AI experiences. The Bot Framework Composer is an open-source, visual authoring canvas that developers and multidisciplinary teams can use to design and build conversational experiences with language understanding, QnA Maker, and bot responses. The Microsoft Bot Framework lets users employ a comprehensive open-source SDK and tools to easily connect a bot to popular channels and devices.

Do you want to interact with the 83.1 million people who own a smart speaker? Amazon, which has captured 70% of this market, has the best AI chatbot software for voice assistants. With Alexa for Business, IT teams can create custom skills that can answer customer questions. The creation of custom skills is a trend that has exploded: Amazon grew from 130 skills to over 100,000 skills as of September 2019, in just over three years. Creating custom skills on Alexa allows your customers to ask questions, order or re-order products or services, or engage with other content spontaneously by simply speaking out loud. With Alexa for Business, teams can integrate with Salesforce, ServiceNow, or any other custom apps and services.

Zendesk's Answer Bot works alongside your support team within Zendesk to answer incoming customer questions right away. The Answer Bot pulls relevant articles from your Zendesk knowledge base to provide customers with the information they need immediately. You can deploy additional technology on top of your Zendesk chatbot, or you can let the Zendesk Answer Bot fly solo on your website chat, within mobile apps, or for internal teams on Slack.

CSML is the first open-source programming language and chatbot engine dedicated to developing powerful and interoperable chatbots. CSML helps developers build and deploy chatbots easily with its expressive syntax and its ability to connect to any third-party API. Used by thousands of chatbot developers, CSML Studio is the simplest way to get started with CSML, with everything included to start building chatbots directly inside your browser. A free playground is also available to let developers experiment with the language without signing up.

Dasha is a conversational AI as a service platform. It provides developers with tools to create human-like, deeply conversational AI applications. These applications can be used for call center agent replacement, text chat, or to add conversational voice interfaces to mobile apps or IoT devices. Dasha was named a Gartner Cool Vendor in Conversational AI 2020.

No knowledge of AI or ML is needed to work with Dasha; any engineer with basic JavaScript knowledge will feel completely at ease.

SurveySparrow is a software platform for conversational surveys and forms. The platform bundles customer satisfaction surveys (i.e., Net Promoter Score (NPS), Customer Satisfaction Score (CSAT), and Customer Effort Score (CES)) and employee experience surveys (i.e., recruitment and pre-hire, employee 360 assessment, employee check-in, and employee exit interviews). The conversational UI delivers surveys in a chat-like experience, an approach that increases survey completion rates by 40%. SurveySparrow comes with a range of out-of-the-box question types and templates. Surveys are embedded on websites or other software tools through integrations with Zapier, Slack, Intercom, and Mailchimp.

Next year, 2.4 billion people will use Facebook Messenger. ManyChat is a great option if you are looking for a quick way to launch a simple chatbot to sell products, book appointments, send order updates, or share coupons on Facebook Messenger. It has industry-specific templates, or you can build your own with a drag-and-drop interface, which lets you launch a bot within minutes without coding. You can easily connect to eCommerce tools, including Shopify, PayPal, Stripe, ActiveCampaign, Google Sheets, and 1,500+ additional apps through Zapier and Integromat.

Link:
Top 10 Recent Chatbots to Make Note of in 2021 - Analytics Insight

Twitter gave researchers for Defense Department access to info for use in government programs – Washington Times

Twitter granted researchers working for the Defense Department access to information shared by its users for study on combating online influence operations, a defense research program manager says.

Brian Kettler, a program manager at the Defense Advanced Research Projects Agency (DARPA), said unintentional collection of Americans' data is possible.

However, there are processes in place to protect Americans' personally identifiable information (PII) and prevent their data from being used by government-funded researchers, he added.

DARPA-funded researchers are following Twitter's terms and conditions, Mr. Kettler said.

"We have an, the performer that's in charge of data provisioning has an agreement with Twitter," Mr. Kettler said. "Twitter has sort of, they scrutinized how we're using the information, what information we're accessing, so it's all going through the platform Ts and Cs with a specific agreement."

Two programs run by Mr. Kettler involve Twitter, namely the Social Simulation for Evaluating Online Messaging Campaigns (SocialSim) program and the Influence Campaign Awareness and Sensemaking (INCAS) program.

Mr. Kettler said that researchers on the SocialSim program have worked with Twitter and that those on the INCAS program intend to collect data from Twitter and plan to build algorithms to analyze data for foreign influence.

"You may inadvertently collect information on U.S. persons," Mr. Kettler said. "When we do that, we have procedures in place and policies to deal with that so that we are not intentionally collecting information on U.S. persons and if we unintentionally collect it that's sequestered and not used. And we have a stringent policy in place in general to protect PII information that we're collecting as well so we're not looking at personal details of individuals."

Mr. Kettler said the government is following Twitter's rules to ensure it remains compliant in how it accesses data on Twitter.

"For example, on Twitter, when somebody deletes a tweet, you have to delete the tweet in your database so it's all going through the standard public ways you can procure data from these platforms," he said. "We're not scraping stuff, we're working through the platforms."
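
As a purely hypothetical sketch of the compliance rule Mr. Kettler describes (the event format and helper names below are invented for illustration and are not Twitter's actual API): when a deletion notice arrives through a platform's official data feed, the corresponding record must be purged from the researcher's store.

```python
# Hypothetical local store of collected tweets, keyed by tweet ID
tweet_store = {
    "1001": {"text": "example tweet", "author": "user_a"},
    "1002": {"text": "another tweet", "author": "user_b"},
}

def handle_compliance_event(event: dict) -> None:
    """Apply a platform compliance event to the local database.

    A 'delete' event means the user removed the tweet, so the
    researcher's copy must be removed as well.
    """
    if event.get("type") == "delete":
        tweet_store.pop(event["tweet_id"], None)  # drop it if we hold it

# Example: the platform reports that tweet 1001 was deleted
handle_compliance_event({"type": "delete", "tweet_id": "1001"})
print(sorted(tweet_store))  # ['1002']
```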

Scraping on social media websites refers to the use of automated software tools to access and extract or copy data from public profiles. The practice often runs afoul of tech platforms' rules, but that does not stop hackers or foes from countries like China from scraping anyway.

Last year, American researcher Christopher Balding and Australian cybersecurity firm Internet 2.0 discovered that Chinese technology company Shenzhen Zhenhua Data had surveilled the social media profiles of tens of thousands of Americans and collected information on people's whereabouts and their personal and professional relationships.

Lawmakers have since debated several proposals on how to better protect Americans' data, including its export to foreign adversaries.

Twitter declined to comment for this report, including about what information the government-funded researchers are accessing.

Twitter's privacy policy encourages users to think about what they decide to make public.

"You are responsible for your Tweets and other information you provide through our services, and you should think carefully about what you make public, especially if it is sensitive information," reads Twitter's website. "If you update your public information on Twitter, such as by deleting a Tweet or deactivating your account, we will reflect your updated content on Twitter.com, Twitter for iOS, and Twitter for Android. By publicly posting content, you are directing us to disclose that information as broadly as possible, including through our [application programming interfaces], and directing those accessing the information through our APIs to do the same."

Asked whether Twitter users were made aware that DARPA researchers looked at their tweets, Mr. Kettler said Twitter treats DARPA researchers the same as anyone else it gives access.

Before the government's selection of researchers for the INCAS program, DARPA required them to detail how they would control the collection of Americans' data and apply Human Subjects Research Controls if they needed to do so, according to an October 2020 DARPA presentation.

The information accessed by the government-funded researchers looks to be tweets made available to the public.

A broad agency announcement about the INCAS program from October 2020 said INCAS "will primarily use publicly available data sources including multilingual, multi-platform social media, online news sources, and online reference data sources."

"We hope that ultimately the tools that we develop to better understand the online information environment we can make open source and platforms can take advantage of, but we're not working with any platforms other than procuring their data through standard channels," Mr. Kettler said.

Read the original:
Twitter gave researchers for Defense Department access to info for use in government programs - Washington Times

How to Lead Through Burnout and Emerge More Resilient – WITN

eMindful Unveils New Programming and Resources to Address Burnout Head On

Published: Oct. 5, 2021 at 12:52 PM EDT

ORLANDO, Fla., Oct. 5, 2021 /PRNewswire/ -- Employees are leaving the workforce en masse, and burnout is to blame. The devastation of the pandemic has taken a toll on employees, with 77% reporting that they have experienced workplace burnout and more than 42% reporting symptoms of anxiety and depression, up from 11% the previous year.

eMindful, the leading provider of evidence-based mindfulness programs for everyday moments and chronic conditions, regularly takes the pulse of its participants and provides programming and resources to address their needs in real time. More than one-third of participants surveyed recently indicated that they are experiencing different types of burnout, including difficulty balancing time spent working versus not working, or that their workload exceeds their capacity.

eMindful is addressing the crisis head on with the introduction of new programming and resources. This includes a Mindfulness-Based Cognitive Training program, which uses a cognitive-behavioral therapy approach with mindfulness to address burnout and prevent depression and relapse.

The program includes 16 expert-led, live, virtual mindfulness sessions and a four-hour group workshop and retreat to build community and support. Using an evidence-based approach, the teacher helps participants build self-compassion, foster positive feelings, thoughts, and behaviors, and manage feelings of overwhelm. The program also includes a click-to-call feature for participants who need immediate access to a mental health professional.

eMindful also is introducing a Leading Through Burnout collection with a live webinar and an on-demand series for leaders to recognize signs of burnout in themselves and their employees, learn strategies to relate to difficult emotions in new and positive ways, and create a pathway for an open dialogue with their staff around workload and mental health.

"Our burned-out workforce is the latest mental health casualty of the pandemic and leaders in particular are suffering," said Mary Pigatti, President, eMindful. "These resources will allow managers to build skills and learn strategies to lead through burnout and emerge more resilient."

The next MBCT program begins on Monday, Oct. 18. Organizations that are interested in bringing this program to their population or their clients' populations, can contact sales@emindful.com.

Media Contact: Zev Suissa, eMindful, 772-569-4540, zev@emindful.com

About eMindful

eMindful, a Wondr Health company, provides evidence-based mindfulness programs for everyday life and chronic conditions by helping individuals make every moment matter.


SOURCE eMindful


Here is the original post:
How to Lead Through Burnout and Emerge More Resilient - WITN

Mac or Windows: 6 Steps to Converting WebP to JPG Bulk – Programming Insider


When analyzing image formats, you probably realized that most of them use either lossy or lossless compression, resulting in differences in file size and color integrity. However, WebP is a bit different.

In WebP, both lossy and lossless compression are used to reduce the size of the image while maintaining relatively high quality. As opposed to JPEG, WebP files require Google Chrome or another WebP viewer to be opened, so the format is not as web-friendly as JPEG. Thus, we may need to convert WebP to JPEG at some point.

It is possible to solve this issue both online and offline using WebP to JPG converters, but we often require something more! This page lists six methods for bulk converting WebPs to JPGs on both Mac and Windows systems.

Top Bulk WebP to JPG Converter 2021 (Windows & Mac)

We tested NCH Software's Pixillion Image Converter on macOS and Windows, and it converted WebP images flawlessly on both platforms. Its ability to read, write, and convert an array of formats, along with the quality of its output, makes it a favorite of many users.

Pixillion Image Converter possesses the following features:

Here is How to Batch Convert WebP to JPG

Software to convert WebP to JPG in bulk (Mac, Windows)

1. Mac Preview

Preview is the best option for converting WebP to JPG on Mac and for free. It reads WebP files and converts them to JPGs without requiring additional software installation. Although it does convert WebP images, the workflow is not as efficient as one created by a professional WebP converter.

For Mac users, here is how to bulk convert WebP to JPG.

2. XnConvert

XnConvert is an open-source program for opening, editing, and converting WebP and 500+ other image formats on Mac, Windows, and even Linux.

You can edit images using the basic and advanced editing tools in this free image converter software. You have complete control over the output preferences, even if it's not visually stylish. During the conversion process, there may also be a loss of resolution.

XnConvert provides a free way to bulk convert WebP to JPG.
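
For readers comfortable with a few lines of code, a scriptable alternative (not one of the methods reviewed in this article) is a short Python sketch using the Pillow library; it assumes Pillow's WebP support is available on your system, and the folder names are placeholders:

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

SRC = Path("webp_images")   # assumed input folder of .webp files
DST = Path("jpg_images")    # output folder for the converted .jpg files
DST.mkdir(exist_ok=True)

for webp_file in SRC.glob("*.webp"):
    # JPEG has no alpha channel, so convert to RGB before saving
    img = Image.open(webp_file).convert("RGB")
    img.save(DST / (webp_file.stem + ".jpg"), "JPEG", quality=90)
    print("converted", webp_file.name)
```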

Online batch conversion from WebP to JPEG

In my experience, I would never use web-based tools cluttered with advertisements, as these could invite malware or ransomware, or lead you directly to harmful websites. So, when selecting a free online WebP to JPG converter, I prioritize security and pay attention to conversion quality. Check out the following list now.

My personal use of Google Drive led me to discover one day that CloudConvert is a recommended document reader for Google Docs, which is also why CloudConvert made this list.

You can be sure that it's totally ad-free.

Its Features

Free online batch conversion service:

Without doubt, Zamzar is one of the best free online choices to convert WebP images; 510 million files converted since 2016 well explains why so many users finally pick this tool.

One of Zamzar's biggest improvements over these years is that there is no longer any need to submit an email address to download the converted files, which saves us from spam in the mailbox.

The Features

Here is How to Batch Convert WebP to JPG Online Free

3. Anywebp

Using AnyWebP, all image details are preserved without affecting image quality, resolution, or clarity. You can also convert files in batches. And what's cooler? You don't even need to upload your images to a server, which makes this tool the best among others that upload your pictures to a server first, posing a privacy concern for many people.

And these are the reasons why AnyWebP is at the top of the list.

The Features

Follow these steps to batch convert WebP to JPG online.

See detailed review at Trendstorys.

Conclusions

Whether you choose a professional WebP to JPG converter, an online image converter, or a free downloadable image converter, all three of these options will convert your WebP images. A professional WebP to JPG converter will save you a lot of time, preserve the original quality, and help you save money as well.

Read the original here:
Mac or Windows: 6 Steps to Converting WebP to JPG Bulk - Programming Insider

Trading apps work to get living people to hear your problem – Floridanewstimes.com

This is one of the drawbacks of apps that make it easy to order food, buy stocks and cryptocurrencies, and more. What if something goes wrong?

Tapping menu after menu in the hope of contacting someone to solve the problem is often a frustrating chase. This is increasingly acknowledged by the start-ups disrupting the investment and trading industry.

Robinhood, an app that helps more than 22 million people trade stocks and cryptocurrencies, announced on Tuesday that it will provide customers with 24/7 phone support covering almost any issue. The announcement follows Coinbase, a cryptocurrency trading platform that said last month it will launch a 24/7 telephone service for many customers by the end of the year.

Robinhood cited concerns about its limited customer support as one of its challenges before the stock first opened on the public market. Earlier this year, Robinhood also settled a wrongful-death claim brought by the family of a 20-year-old who died by suicide after receiving only an automatically generated reply to an email about a negative balance of about $730,000 in his account.

Reaching Robinhood's customer support early on meant communicating primarily via email, but more live phone support has been added in recent months.

Gretchen Howard, Chief Operating Officer of Robinhood Markets Inc., said the company grew its customer support staff to close to 2,700 workers between March 2020 and June 2021.

With so many first-time investors forming its customer base, many of Robinhood's customer questions are about setting up a bank account or filing a first tax return. However, demand can fluctuate significantly from day to day.

"If someone makes a famous tweet about crypto, our crypto volume can grow tenfold instantly," Howard said.

Customers logged into the Robinhood app can now request a callback from support. Through this process, the app will also try to help customers solve the problem themselves, if possible. The company, based in Menlo Park, California, is still working on how to provide live phone service to customers who can't log in to their accounts.

William Van Horn II, a 30-year-old from Pensacola, Florida, has already experienced Robinhood's customer service several times. He is not always happy.

He once mistakenly deposited $1,000 instead of $100 into his account. Shortly thereafter, he sent an email to customer service hoping to cancel the deposit. He eventually reached a representative over the phone who tried to guide him through some steps. But Van Horn said he couldn't cancel the $1,000 deposit, or at least get back the extra $900.

Van Horn has other complaints about Robinhood's customer service, but they weren't enough to make him stop using the app.

"There is a lack of customer service," he said. "But the interface is still pretty good when it comes to mobile use."

The rest is here:
Trading apps work to get living people to hear your problem - Floridanewstimes.com

What is encryption? – PC World New Zealand

If you've read anything about technology in the last few years, you may have seen the term encryption floating around. It's a simple concept, but the realities of its use are enormously complicated. If you need a quick 101 on what encryption is and how it's used on modern devices, you've come to the right place. But first, we have to start at the beginning.

At the most simple, basic level, encryption is a way to mask information so that it can't be immediately accessed. Encryption has been used for thousands of years, long before the rise of the information age, to protect sensitive or valuable knowledge. The use and study of encryption, codes, and other means of protecting or hiding information is called cryptography.

The most simple version of encryption is a basic replacement cipher. If you use numbers to indicate letters in the Latin alphabet, A=1, B=2, et cetera, you can send a message as that code. It isn't immediately recognizable, but anyone who knows the code can quickly decipher the message. So, a seemingly random string of numbers:

20 8 5 16 1 19 19 23 15 18 4 9 19 19 23 15 18 4 6 9 19 8

can become vital information, to someone who knows how to read it.

t h e p a s s w o r d i s s w o r d f i s h

That's an incredibly basic example, the kind of thing you might find in the classic decoder ring toy. Archaeologists have found examples of people encrypting written information that are thousands of years old: Mesopotamian potters sent each other coded messages in clay, telling their friends how to make a new glaze without letting their competitors know. A set of Greek substitutions called the Polybus square is another example, requiring a key to unlock the message. It was still being used in the Middle Ages.
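
As a toy illustration of the replacement cipher above (a minimal sketch; the function names are ours), the encoding and decoding fit in a few lines of Python:

```python
# Toy A=1, B=2, ... replacement cipher from the example above
def encode(message: str) -> str:
    return " ".join(str(ord(c) - ord("a") + 1) for c in message if c.isalpha())

def decode(numbers: str) -> str:
    return "".join(chr(int(n) + ord("a") - 1) for n in numbers.split())

code = encode("the password is swordfish")
print(code)          # 20 8 5 16 1 19 19 23 15 18 4 ...
print(decode(code))  # thepasswordisswordfish
```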

Cryptography is used to protect information, and there's no more vital application than warfare. Militaries have encrypted their messages to make sure that enemies won't know their plans if communication is intercepted. Likewise, militaries also try to break encryption, discover the pattern to a code without having the original key. Both have greatly advanced the field of cryptography.

Take a look at World War II for two illustrative examples of practical encryption. The German military used a physical electronic device called an Enigma machine which could encode and decode messages with incredible complexity, allowing for fast and secret communication. But through a combination of finding rotating daily codes and advanced analysis, the Allies were able to break the encryption of the Enigma machines. They gained a decisive military advantage, listening to encrypted German radio messages and accessing their true contents.

But an encryption code doesn't necessarily have to be based on complex mathematics. For their own secret radio communications, the American military would use Native American code talkers, soldiers who used their native languages like Comanche and Navajo. Speaking to each other in these languages, both in plain speech and in basic word-to-letter cipher codes, the code talkers could communicate orders and other information via radio. The German, Italian, and Japanese militaries could easily intercept these transmissions, but having no access to any Native American speakers, this relatively simple method of encryption was unbreakable.

In the modern world, encryption is done almost exclusively via computers. Instead of encrypting each word or letter with another, or even following a pattern to do so, electronic encryption scrambles individual bits of data in a randomized fashion and scrambles the key as well. Decrypting just a tiny bit of this information by hand, even if you had the correct key, would take more than a lifetime.

With the rapid computation available in the electronic world, data encrypted digitally is more or less impossible to crack by conventional means. For example, the ones and zeros (bits) that make up the digital contents of a file encoded on the common 128-bit Advanced Encryption Standard are scrambled around ten different times in a semi-random pattern. For another computer to rearrange them back in the correct order, without the key, it would take so long that the sun would burn out before it was cracked. And that's the weakest version of AES: it also comes in 192- and 256-bit key sizes!
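
As a hedged, minimal sketch of what using AES looks like in practice, here is an example via Python's third-party `cryptography` package and its AES-GCM mode, one common authenticated variant of AES and only one of several ways to apply the standard:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=128)  # the 128-bit AES key
nonce = os.urandom(12)                     # unique per message; never reuse with a key
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"the password is swordfish", None)
print(ciphertext.hex())                                  # scrambled bits
print(aesgcm.decrypt(nonce, ciphertext, None).decode())  # original text
```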

Every major modern operating system includes at least some tools for encrypting your data: Windows, macOS, iOS, Android, and Linux. The BitLocker system in Windows is one example. To a greater or lesser degree, you can encrypt all of your data so it requires a key to unlock. The same is true for online file storage, and your personal information stored in other secure locations, like your bank.

To access encrypted information, you can use one of three different types of keys. In computer security, these are referred to as something you know (a password or PIN), something you have (a physical security key like a YubiKey), and something you are (biometric authentication, like a fingerprint or face scan).

Encrypting the storage of your devices protects them in purely electronic terms: without one of those unlock methods, it's incredibly difficult bordering on impossible for anyone to access your data. The extra processing it takes to encrypt and decrypt data can make computer storage perform more slowly, but modern software can help minimize this speed reduction.

Of course if your password, or your physical key, or your fingerprint can be accessed by someone else, they can get to that data. That's why it's a good idea to use extra security methods. A common two-factor authentication system (2FA) uses both a password (something you know) and a text message sent to your phone (something you have) to log in. That gives an extra layer of security to any information stored in that system. Using a password manager to create unique passwords for each site or service you use adds even more protection, preventing hackers from reusing your login information if they do manage to pilfer your credentials for a given service.
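
As an illustration of the "something you have" factor, here is a minimal sketch of time-based one-time passwords (TOTP), the scheme behind most authenticator apps, using the third-party `pyotp` package; this is one implementation choice among several, not the text-message variant described above:

```python
import pyotp  # pip install pyotp

# Done once at enrollment: the service and the phone share this secret
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()          # 6-digit code that changes every 30 seconds
print("current code:", code)
print("valid?", totp.verify(code))  # the server-side check at login
```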

Encrypting data doesn't mean it's absolutely impossible to access improperly. There are always weaknesses and ways around security. But using even basic encryption tools can help protect your data far beyond what's available by default.

More here:
What is encryption? - PC World New Zealand

Mathematics, Technology, and Economics – EurekAlert

Image: Cover of "Blockchain and Distributed Ledgers: Mathematics, Technology, and Economics"

Credit: World Scientific

The Global Financial Crisis (GFC) has demonstrated that the existing banking and payment system, while still working, is outdated and struggling to support the continually changing requirements of the modern world. It would be an understatement to say that the GFC turned into a wasted opportunity to reorganize the world financial ecosystem. In their new book, Blockchain and Distributed Ledgers: Mathematics, Technology, and Economics, Alexander Lipton (co-founder and CIO of Sila) and Adrien Treccani (founder and CEO of METACO) argue that the seemingly squandered opportunities to reshape the financial system are not all lost. More specifically, they show that, if used deliberately, new technologies, including blockchains and distributed ledgers, can create new business models. New technologies will put pressure on the incumbents. More importantly, they will allow newly formed fintech companies to enter the market in earnest, thus providing considerable benefits to the general public.

This book concentrates on distributed ledger technology (DLT) and its potential impact on society. This technology, which became extremely popular over the last decade, allows us to solve many complicated problems arising in economics, banking, and finance, industry, trade, and many other fields. DLT develops new mechanisms for distributed consensus, using advanced tools from cryptography, game theory, economics, finance, scientific computing, etc. It offers an optimal and elegant solution in many situations, provided that it can overcome some of its inherent limitations and is used appropriately.

While strong mathematical skills are not required, readers should learn the necessary background materials from the book itself. The book clearly and accessibly explains and articulates rather sophisticated ideas underpinning DLT, so that it is accessible to anyone with a modicum of understanding of computer science, mathematics, and economics.

Throughout the book, the authors use their considerable practical experience to skilfully guide the reader through complexities, nuances, achievements, and promises of blockchains and distributed ledgers. The book is self-contained and provides all the necessary theoretical background for the reader to understand how DLT operates in both theory and practice and, if the need occurs, build a simple distributed ledger from scratch. It can serve as a primary textbook for a course on DLT and crypto-economics and a supplementary text for courses on economics, finance, cryptography, and others.
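
To give a flavor of what "a simple distributed ledger from scratch" means, here is a toy hash-chained ledger in Python, a minimal sketch of the core blockchain data structure rather than the book's own code:

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a block whose hash commits to its contents and predecessor."""
    block = {"time": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

# Build a tiny chain: a genesis block, then blocks linked by hashes
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block({"from": "alice", "to": "bob", "amount": 5}, chain[-1]["hash"]))
chain.append(make_block({"from": "bob", "to": "carol", "amount": 2}, chain[-1]["hash"]))

# Tampering with any block would break every later link in the chain
assert all(chain[i]["prev_hash"] == chain[i - 1]["hash"] for i in range(1, len(chain)))
print("chain of", len(chain), "blocks is consistent")
```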

Blockchain and Distributed Ledgers: Mathematics, Technology, and Economics retails for US$68 / 60 (paperback) and US$138 / 120 (hardcover) and is also available in electronic formats. To order or know more about the book, visit http://www.worldscientific.com/worldscibooks/10.1142/11857.

About the Authors

Alexander Lipton is Co-Founder and Chief Information Officer of Sila, Partner at Numeraire, Visiting Professor and Dean's Fellow at the Hebrew University of Jerusalem, and Connection Science Fellow at MIT. Alex is a board member of Sila and an advisory board member of several fintech companies worldwide. In 2006-2016, Alex was Co-Head of the Global Quantitative Group and Quantitative Solutions Executive at Bank of America. Earlier, he was a senior manager at Citadel, Credit Suisse, Deutsche Bank, and Bankers Trust. At the same time, Alex held visiting professorships at EPFL, NYU, Oxford University, Imperial College, and the University of Illinois. Before becoming a banker, Alex was a Full Professor of Mathematics at the University of Illinois and a Consultant at Los Alamos National Laboratory. In 2000 Alex was awarded the Inaugural Quant of the Year Award and in 2021 the Buy-side Quant of the Year Award by Risk Magazine. Alex authored/edited 10 other books and more than a hundred scientific papers. Alex is an Associate Editor of several journals, including Finance and Stochastics, Journal of FinTech, International Journal of Theoretical and Applied Finance, and Quantitative Finance. He is a frequent keynote speaker at Quantitative Finance and FinTech conferences and forums worldwide.

Adrien Treccani is founder and CEO of METACO, a leading provider of security infrastructure for digital assets, and a software engineer specialized in high-performance computing and financial engineering. He has been an active member of the fintech community since 2012 and has advised numerous banks and financial institutions globally on distributed ledger technology, cryptocurrencies, and decentralized finance. Adrien lectures at the University of Lausanne and the École Polytechnique Fédérale de Lausanne and has published in top peer-reviewed journals, including Management Science and the Journal of Financial Econometrics. Adrien holds a Bachelor's degree in computer science and a Master's degree in financial engineering from the École Polytechnique Fédérale de Lausanne. He obtained a PhD in mathematical finance at the Swiss Finance Institute and completed a postdoctorate in high-performance computing at the University of Zürich. He worked in the hedge fund industry as a quantitative analyst before founding METACO in 2015.

About World Scientific Publishing Co.

World Scientific Publishing is a leading international independent publisher of books and journals for the scholarly, research and professional communities. World Scientific collaborates with prestigious organisations like the Nobel Foundation and US National Academies Press to bring high quality academic and professional content to researchers and academics worldwide. The company publishes about 600 books and over 140 journals in various fields annually. To find out more about World Scientific, please visit http://www.worldscientific.com.

For more information, contact WSPC Communications at communications@wspc.com.


Go here to see the original:
Mathematics, Technology, and Economics - EurekAlert