Addressing the gender bias in artificial intelligence and automation – OpenGlobalRights


Twenty-five years after the adoption of the Beijing Declaration and Platform for Action, significant gender bias in existing social norms remains. For example, as recently as February 2020, the Indian Supreme Court had to remind the Indian government that its arguments for denying women command positions in the Army were based on stereotypes. And gender bias is not merely a male problem: a recent UNDP report entitled Tackling Social Norms found that about 90% of people (both men and women) hold some bias against women.

Gender bias and various forms of discrimination against women and girls pervade all spheres of life. Women's equal access to science and information technology is no exception. While the challenges posed by the digital divide and the under-representation of women in STEM (science, technology, engineering and mathematics) persist, artificial intelligence (AI) and automation are posing new challenges to achieving substantive gender equality in the era of the Fourth Industrial Revolution.

If AI and automation are not developed and applied in a gender-responsive way, they are likely to reproduce and reinforce existing gender stereotypes and discriminatory social norms. In fact, this may already be happening (un)consciously. Let us consider a few examples:

Despite the potential for such gender bias, the growing crop of AI standards does not adequately integrate a gender perspective. For example, the Montreal Declaration for the Responsible Development of Artificial Intelligence makes no explicit reference to integrating a gender perspective, while the AI4People's Ethical Framework for a Good AI Society mentions diversity/gender only once. Both the OECD Council Recommendation on AI and the G20 AI Principles stress the importance of AI contributing to reducing gender inequality, but provide no details on how this could be achieved.

The Responsible Machine Learning Principles do embrace bias evaluation as one of the principles. This siloed approach of embracing gender is also adopted by companies like Google and Microsoft, whose AI Principles underscore the need to avoid creating or reinforcing unfair bias and to treat all people fairly, respectively. Companies working on AI and automation should adopt a gender-responsive approach across all principles to overcome inherent gender bias. Google should, for example, embed a gender perspective in assessing which new technologies are socially beneficial or how AI systems are built and tested for safety.

What should be done to address the gender bias in AI and automation? The gender framework for the UN Guiding Principles on Business and Human Rights could provide practical guidance to states, companies and other actors. The framework involves a three-step cycle: gender-responsive assessment, gender-transformative measures and gender-transformative remedies. The assessment should be able to respond to differentiated, intersectional, and disproportionate adverse impacts on women's human rights. The consequent measures and remedies should be transformative in that they should be capable of bringing change to patriarchal norms, unequal power relations, and gender stereotyping.

States, companies and other actors can take several concrete steps. First, women should be active participants, rather than mere passive beneficiaries, in creating AI and automation. Women and their experiences should be adequately integrated in all steps related to the design, development and application of AI and automation. In addition to proactively hiring more women at all levels, AI and automation companies should engage gender experts and women's organisations from the outset in conducting human rights due diligence.

Second, the data that informs algorithms, AI and automation should be sex-disaggregated; otherwise the experiences of women will not inform these technological tools, which in turn may continue to internalise existing gender biases against women. Moreover, even data related to women should be guarded against any inherent gender bias.
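As a minimal sketch of what sex-disaggregated data makes possible, the pure-Python helper below (field name and figures are invented for illustration) reports how each group is represented in a training set; a heavily skewed split is an early warning that a model trained on the data may under-serve the minority group:

```python
from collections import Counter

def representation_report(records, field="sex"):
    """Share of each group in a dataset, as a first disaggregation step."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical training records for illustration
records = [{"sex": "F"}] * 30 + [{"sex": "M"}] * 70
print(representation_report(records))  # {'F': 0.3, 'M': 0.7}
```

A report like this only surfaces representation gaps; guarding against bias *within* each group's data still requires substantive review.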

Third, states, companies and universities should plan for and invest in building the capacity of women to achieve a smooth transition to AI and automation. This will require vocational/technical training both in education and in the workplace.

Fourth, AI and automation should be designed to overcome gender discrimination and patriarchal social norms. In other words, these technologies should be employed to address challenges faced by women such as unpaid care work, the gender pay gap, cyber bullying, gender-based violence and sexual harassment, trafficking, breaches of sexual and reproductive rights, and under-representation in leadership positions. Similarly, the power of AI and automation should be employed to enhance women's access to finance, higher education and flexible work opportunities.

Fifth, special steps should be taken to make women aware of their human rights and the impact of AI and automation on those rights. Similar measures are needed to ensure that remedial mechanisms, both judicial and non-judicial, are responsive to gender bias, discrimination, patriarchal power structures, and asymmetries of information and resources.

Sixth, states and companies should keep in mind the intersectional dimensions of gender discrimination; otherwise their responses, despite good intentions, will fall short of using AI and automation to accomplish gender equality. Low-income women, single mothers, women of colour, migrant women, women with disabilities, and non-heterosexual women may all be affected differently by AI and automation and will have differentiated needs and expectations.

Finally, all standards related to AI and automation should integrate a gender perspective in a holistic manner, rather than treating gender as merely a bias issue to be managed.

Technologies are rarely gender neutral in practice. If AI and automation continue to ignore women's experiences or to leave women behind, everyone will be worse off.


Banking and payments predictions 2020: Artificial intelligence – Verdict

Artificial intelligence (AI) refers to software-based systems that use data inputs to make decisions on their own. Machine learning is an application of AI that gives computer systems the ability to learn and improve from data without being explicitly programmed.

2019 saw financial institutions explore a broad range of possible AI use cases in both customer-facing and back-office processes, increasing budgets, headcounts, and partnerships. 2020 will see an increased focus on separating the marketing story from actual business impact, in order to place bigger bets in fewer areas. This will help banks scale proven AI across the enterprise to forge competitive advantage.

Artificial intelligence will re-invigorate digital money management, helping incumbents drip-feed highly personalised spending tips to build trust and engagement in the absence of in-person interaction. Features like predictive insights around cashflow shortfalls, alerts on upcoming bill payments, and various "what if" scenarios when trying on different financial products give customers transparency around their options and the risks they face. The service will act as an always-on, in-your-pocket, predictive advisor.

AI-enhanced customer relationship management (CRM) will help digital banks optimise product recommendations to rival the conversion rates of best-in-class online retailers. These product suggestions won't read as sales pitches but rather as valuable advice, such as a pre-approved loan ahead of a cash shortfall or an option to remortgage to fund home improvements. This will help incumbents build customer advocacy and trust as new entrants vie for attention.

AI-powered onboarding, when combined with voice and facial recognition technologies, will help incumbents make themselves much easier to do business with, especially at the initial point of conversion but also thereafter at each moment of authentication. AI will offer particular support through Know Your Customer (KYC) processes, helping incumbents keep pace with new entrants. Standard Bank in South Africa, for example, used WorkFusion's AI capabilities to reduce customer onboarding time from 20 days to just five minutes.

Banks' heavy compliance burden will continue to drive AI adoption. Last year, large global banks such as OCBC Bank, Commonwealth Bank, Wells Fargo, and HSBC made big investments in areas such as automated data management, reporting, anti-money laundering (AML), compliance, automated regulation interpretation, and mapping. Increasingly, partnering with AI-enabled regtech firms will help incumbents reduce operational risk and enhance reporting quality.

As artificial intelligence becomes more embedded into all areas of customers' lives, concerns around the "black box" driving decisions will grow, with more demands for explainable AI. As it is, customers with little or no digital footprint are less visible to applications that rely on data to profile people and assess risk. Traditional banks' credit risk algorithms often disproportionately exclude black and Hispanic groups in the US, as well as women, because these groups have historically earned less over their lifetimes.

In 2020, senior management will be held directly accountable for the decisions of AI-enabled algorithms. This will drive increased focus on the quality of the data fed to those algorithms, and perhaps limits on the use of the most dynamic machine learning models because of their regulatory opacity.

This is an edited extract from the Banking & Payments Predictions 2020 Thematic Research report produced by GlobalData Thematic Research.

GlobalData is this website's parent business intelligence company.


How Artificial Intelligence is helping the fight against COVID-19 – Health Europa

An artificial intelligence (AI) tool has been shown to accurately predict which patients newly infected with the COVID-19 virus will go on to develop severe respiratory disease.

The new novel coronavirus, named SARS-CoV-2, had infected 735,560 patients worldwide as of March 30. According to the World Health Organization, the illness has caused more than 34,830 deaths to date, more often among older patients with underlying health conditions.

The study, published in the journal Computers, Materials & Continua, was led by NYU Grossman School of Medicine and the Courant Institute of Mathematical Sciences at New York University, in partnership with Wenzhou Central Hospital and Cangnan People's Hospital, both in Wenzhou, China.

The study has revealed the best indicators of future severity and found that they were not as expected.

Corresponding author Megan Coffee, clinical assistant professor in the Division of Infectious Disease & Immunology at NYU Grossman School of Medicine, said: "While work remains to further validate our model, it holds promise as another tool to predict the patients most vulnerable to the virus, but only in support of physicians' hard-won clinical experience in treating viral infections."

"Our goal was to design and deploy a decision-support tool using AI capabilities, mostly predictive analytics, to flag future clinical coronavirus severity," says co-author Anasse Bari, PhD, a clinical assistant professor in computer science at the Courant Institute. "We hope that the tool, when fully developed, will be useful to physicians as they assess which moderately ill patients really need beds, and who can safely go home, with hospital resources stretched thin."

For the study, demographic, laboratory, and radiological findings were collected from 53 patients as each tested positive in January 2020 for COVID-19 at the two Chinese hospitals. In a minority of patients, severe symptoms developed within a week, including pneumonia.

The researchers wanted to find out whether AI techniques could help to accurately predict which patients with the virus would go on to develop Acute Respiratory Distress Syndrome or ARDS, the fluid build-up in the lungs that can be fatal in the elderly.

To do this they designed computer models that make decisions based on the data fed into them, with the programmes getting smarter the more data they consider. Specifically, the current study used decision trees, which track a series of decisions between options and model the potential consequences of the choices at each step in a pathway.

The AI tool found that changes in three features (levels of the liver enzyme alanine aminotransferase (ALT), reported myalgia, and haemoglobin levels) were most accurately predictive of subsequent severe disease. Together with other factors, the team reported being able to predict the risk of ARDS with up to 80% accuracy.
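The article does not publish the study's actual tree, but a decision tree over these three features can be caricatured as a cascade of threshold tests. The branch order and risk labels below are invented purely to illustrate how such a model makes a prediction:

```python
def predict_ards_risk(alt_elevated, myalgia, haemoglobin_high):
    """Toy decision tree over the three features the study highlighted.

    Each if-branch mirrors one split in a decision tree; the ordering
    and the risk labels are hypothetical, not taken from the paper.
    """
    if myalgia:
        return "high" if alt_elevated else "moderate"
    if haemoglobin_high:
        return "moderate"
    return "low"

print(predict_ards_risk(alt_elevated=True, myalgia=True, haemoglobin_high=False))  # high
```

A real tree learner would choose the splits and thresholds from the training data rather than by hand, which is what the researchers' models did with the 53 patients' records.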

ALT levels, which rise dramatically as diseases like hepatitis damage the liver, were only a bit higher in patients with COVID-19, but still featured prominently in the prediction of severity. Deep muscle aches (myalgia) were also more commonplace, and have been linked by past research to higher general inflammation in the body.

Lastly, higher levels of haemoglobin, the iron-containing protein that enables blood cells to carry oxygen to bodily tissues, were also linked to later respiratory distress. Could this be explained by other factors, like unreported smoking of tobacco, which has long been linked to increased haemoglobin levels?

Of the 33 patients at Wenzhou Central Hospital interviewed on smoking status, the two who reported having smoked also reported that they had quit.

Limitations of the study, say the authors, included the relatively small data set and the limited clinical severity of disease in the population studied.

"I will be paying more attention in my clinical practice to our data points, watching patients more closely if they, for instance, complain of severe myalgia," adds Coffee. "It's exciting to be able to share data with the field in real time when it can be useful. In all past epidemics, journal papers were only published well after the infections had waned."


Spending in Artificial Intelligence to accelerate across the public sector due to automation and social distancing compliance needs in response to…

April 9, 2020 - LONDON, UK: Prior to the COVID-19 pandemic, the IDC (International Data Corporation) Worldwide Artificial Intelligence Spending Guide had forecast European artificial intelligence (AI) spending of $10 billion for 2020, and healthy growth at a 33% CAGR through 2023. With the COVID-19 outbreak, IDC expects a variety of changes in spending in 2020. AI solutions deployed in the cloud will experience a strong uptake, showing that companies are looking at deploying intelligence in the cloud to be more efficient and agile.

"Following the COVID-19 outbreak, many industries such as transportation and personal and consumer services will be forced to revise their technology investments downwards," said Andrea Minonne, senior research analyst at IDC Customer Insights & Analysis. "On the other hand, AI is a technology that can play a significant role in helping businesses and societies deal with and solve large scale disruption caused by quarantines and lockdowns. Of all industries, the public sector will experience an acceleration of AI investments. Hospitals are looking at AI to speed up COVID-19 diagnosis and testing and to provide automated remote consultations to patients in self-isolation through chatbots. At the same time, governments will use AI to assess social distancing compliance"

In the IDC report What Is the Impact of COVID-19 on the European IT Market? (IDC #EUR146175020, April 2020), we assessed the impact of COVID-19 across 181 European companies and found that, as of March 23, 16% of European companies believe automation through AI and other emerging technologies can help them minimize the impact of COVID-19. With large-scale lockdowns in place, a shortage of workers and supply chain disruptions will drive automation needs across manufacturing.

Applying intelligence to automate processes is a crucial response to the COVID-19 crisis. Not only does automation allow European companies to digitally transform, but it also lets them make prompt data-driven decisions and has a positive impact on business efficiency. IDC expects a surge in adoption of automated COVID-19 diagnosis in healthcare to speed up diagnosis and save time for both doctors and patients. As the virus spreads quickly, labor shortages in industries where product demand is surging can become a critical problem. For that reason, companies are renovating their hiring processes, applying a mix of intelligent automation and virtualization. Companies will also aim to automate their supply chains, maintain their agility and avoid production bottlenecks, especially in industries with vast supplier networks. With customer service centers becoming severely restricted, automation will be a crucial part of remote customer engagement, and chatbots will help customers in self-isolation get the support they need without having to wait a long time.

"As a short-term response to the COVID-19 crisis, AI can play a crucial part in automating processes and limiting human involvement to a necessary minimum," said Petr Vojtisek, research analyst at IDC Customer Insights & Analysis. "In the longer term, we might observe an increase in AI adoption for companies that otherwise wouldn't consider it, both for competitive and practical reasons."

IDC's Worldwide Semiannual Artificial Intelligence Spending Guide provides guidance on the expected technology opportunity around the AI market across nine regions. Segmented by 32 countries, 19 industries, 27 use cases, and 6 technologies, the guide provides IT vendors with insight into this rapidly growing market and how the market will develop over the coming years.

For IDC's European coverage of COVID-19, click here.


CORRECTION – Labelbox Awarded Artificial Intelligence Contract by Department of Defense – Yahoo Finance

Leading provider of training data platforms for machine learning, Labelbox receives prestigious SBIR contract from AFWERX for U.S. Air Force

SAN FRANCISCO, April 09, 2020 (GLOBE NEWSWIRE) -- This release for Labelbox corrects and replaces the release issued today at 7:00 am ET with the headline "Labelbox Awarded Artificial Intelligence Grant by Department of Defense". The word "grant" has been replaced in the headline, subheadline, and release body with the word "contract". The corrected release follows.

Labelbox, the world's leading training data platform, is among an elite selection of artificial intelligence companies to receive a contract from the Department of Defense to support national security as the U.S. scrambles to stay ahead of its rivals.

While some in Silicon Valley balk at working with the government, Labelbox's founders are vocal about their belief that technology companies have a responsibility to help the U.S. maintain its technological advantage in the face of competition from nation states.

"I grew up in a poor family, with limited opportunities and little infrastructure," said Manu Sharma, CEO and one of Labelbox's co-founders, who was raised in a village in India near the Himalayas. He said that the opportunities afforded by the U.S. have helped him achieve more success in ten years than multiple generations of his family back home. "We've made a principled decision to work with the government and support the American system," he said.

Labelbox is a software platform that allows data science teams to manage the data used to train supervised-learning models. Supervised learning is a branch of artificial intelligence that uses labeled data to train algorithms to recognize patterns in images, audio, video or text. After being fed millions of labeled pictures of mobile missile launchers from satellite imagery, for example, a supervised-learning system will learn to pick out missile launchers in pictures it has never seen.
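As a toy illustration of supervised learning (not Labelbox's actual pipeline or the Air Force's models), a nearest-centroid classifier "trains" by averaging labeled feature vectors and then labels new examples by proximity. The 2-D features and labels below are invented stand-ins for image data:

```python
def train_centroids(labeled_points):
    """Average the feature vectors for each label (the 'training' step)."""
    sums, counts = {}, {}
    for features, label in labeled_points:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, features):
    """Predict the label whose centroid is closest to the new example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab], features))

# Hypothetical 2-D 'image' features: label 1 = target object, 0 = background
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
model = train_centroids(data)
print(classify(model, [0.85, 0.85]))  # 1
```

The point of the example is the workflow: the quality of the labels fed in determines the quality of the model that comes out, which is the problem a training data platform exists to manage.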

For data science teams to work better with each other and with labelers around the world, they need a platform and tools. Without those things, managing large sets of data quickly becomes overwhelming. Labelbox solves that problem by facilitating collaboration, rework, quality assurance, model evaluation, audit trails, and model-assisted labeling in one platform. The platform is tailored for computer vision systems but can handle all forms of data. The platform also helps with billing and time management.

"Labelbox is an integrated solution for data science teams to not only create the training data but also to manage it in one place," said Sharma. "It's the foundational infrastructure for customers to build their machine learning pipeline."

The company won an Air Force AFWERX Phase 1 Small Business Innovation Research contract to conduct feasibility studies on how to integrate the Labelbox platform with various stakeholders in the Air Force. Labelbox recently hired a representative in Washington, D.C., to manage the process.

The Small Business Innovation Research (SBIR) program is a highly competitive program that encourages domestic small businesses to engage in Federal Research and Development. The United States Department of Defense is the largest of 11 federal agencies participating in the program. Air Force Innovation Hub Network (AFWERX) is a United States Air Force program intended to engage innovators and entrepreneurs in developing effective solutions to challenges faced by the service.

About Labelbox
Founded in 2018 and based in San Francisco, Labelbox is a collaborative training data platform for machine learning applications. Instead of building their own expensive and incomplete homegrown tools, companies rely on Labelbox as the training data platform that acts as a central hub for data science teams to interface with dispersed labeling teams. Better ways to input and manage data translate into higher-quality training data and more accurate machine-learning models. Labelbox has raised $39 million in capital from leading VCs in Silicon Valley. For more information, visit: https://www.Labelbox.com/

Editorial Contact
Lonn Johnston for Labelbox, +1 650.219.7764, lonn@flak42.com


Neuromorphic Chips: The Third Wave of Artificial Intelligence – Analytics Insight

The age of traditional computers is reaching its limit. Without innovation, it will be difficult to move past the current technology threshold; a major design transformation, with improved performance, is needed to change the way we view computers. Moore's law (named after Gordon Moore, who formulated it in 1965) states that the number of transistors in a dense integrated circuit doubles about every two years while their price halves. But the law is now losing its validity, and hardware and software experts have converged on two candidate successors: quantum computing and neuromorphic computing. While quantum computing has made major strides, neuromorphic computing remained largely in the lab until Intel announced its neuromorphic chip, Loihi. This may mark the third wave of artificial intelligence.

The first generation of AI was marked by rule-based systems that emulated classical logic to draw reasoned conclusions within a specific, narrowly defined problem domain. It was well suited to monitoring processes and improving efficiency, for example. The second generation used deep learning networks to analyze content and data, and was largely concerned with sensing and perception. The third generation is about drawing parallels to the human thought process, like interpretation and autonomous adaptation. In short, it mimics the way neurons spike in the human nervous system, relying on densely connected transistors that mimic the activity of ion channels. This allows such chips to integrate memory, computation, and communication at higher speed and complexity, and with better energy efficiency.
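The spiking behaviour such chips implement in silicon can be sketched in software with a single leaky integrate-and-fire neuron, the textbook building block of spiking neural networks. The threshold and leak values below are illustrative, not Loihi's actual parameters:

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron over a list of input currents.

    The membrane potential decays ('leaks') each time step, accumulates the
    incoming current, and emits a spike (then resets) when it crosses the
    threshold -- information is carried by the timing of spikes, not by
    continuous activations as in a deep learning network.
    """
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current
        if v >= threshold:
            spikes.append(1)
            v = 0.0  # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.5, 0.5, 0.5, 0.0, 0.9, 0.9]))  # [0, 0, 1, 0, 0, 1]
```

Because a neuron only communicates when it spikes, a chip built from such units can stay idle most of the time, which is where the energy-efficiency claims come from.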

Loihi is Intel's fifth-generation neuromorphic chip. This 14-nanometer chip has a 60-square-millimeter die and contains over 2 billion transistors, as well as three managing Lakemont cores for orchestration. It contains a programmable microcode engine for on-chip training of asynchronous spiking neural networks (SNNs). In total, it packs 128 cores. Each core has a built-in learning module, and around 131,000 computational neurons in all communicate with one another, allowing the chip to understand stimuli. On March 16, Intel and Cornell University showcased a new system demonstrating the ability of this chip to learn and recognize 10 hazardous materials by smell, and it can do so even in the presence of data noise and occlusion. According to their joint paper in Nature Machine Intelligence, this could be used to detect the presence of explosives, narcotics, polymers and other harmful substances, as well as signs of smoke, carbon monoxide, etc. It can purportedly do this faster and more accurately than sniffer dogs, thereby threatening to replace them. The researchers achieved this by training the chip on a circuit diagram modelled on biological olfaction. They built the dataset by releasing ten hazardous chemicals (including acetone, ammonia, and methane) through a wind tunnel and collecting the signals with a set of 72 chemical sensors.

This tech has manifold applications, such as identifying harmful substances at airports and detecting the presence of diseases and toxic fumes in the air. Best of all, the chip constantly re-wires its internal network to allow different types of learning. Future versions could transform traditional computers into machines that learn from experience and make cognitive decisions, adaptive in the way human senses are. And to put a cherry on top, it uses a fraction of the energy of current state-of-the-art systems. It is predicted to displace Graphics Processing Units (GPUs).

Although Loihi may soon evolve into a household word, it is not the only one. The neuromorphic approach is being investigated by IBM, HPE, MIT, Purdue, Stanford, and others. IBM is in the race with its TrueNorth chip, which has 4,096 cores, each with 256 neurons, each neuron having 256 synapses to communicate with the others. Germany's Jülich Research Centre's Institute of Neuroscience and Medicine and the UK's Advanced Processor Technologies Group at the University of Manchester are working on a low-grade supercomputer called SpiNNaker, short for Spiking Neural Network Architecture. It is designed to simulate so-called cortical microcircuits, and hence the human brain's cortex, and to help us understand complex diseases like Alzheimer's.

Who knows what sort of computational trends we may see in the coming years. But one thing is sure: the team at Analytics Insight will keep a close watch on it.


Artificial intelligence to be added to class 11 curriculum in India – Khaleej Times

The Central Board of Secondary Education (CBSE) will introduce Design Thinking, Physical Activity Trainer and Artificial Intelligence as new subjects for class 11 from the 2020-21 academic year, officials have revealed.

To make the new generation more creative, innovative and physically fit, and to keep pace with global developments and requirements in the workplace, the board is introducing the three new subjects, said Biswajit Saha, Director of Training and Skill Education, CBSE.

"While thinking is a skill that all humans possess, the 21st century's requirement is of critical thinking and problem-solving. Design Thinking is a systematic process of thinking that opens up the horizons of creativity and enables even the most conditioned thinkers to bring about new and innovative solutions to the problems at hand," he said.

According to Saha, the course on Physical Activity Trainer will not only help develop the skills of a trainer but will also serve as a life skill.

"Artificial Intelligence is also a simulation by machines of the unlimited thinking capacity of humans. Physical Activity is a must if the body and mind are to be kept healthy.

"With this view in mind, the course on Physical Activity Trainer has been prepared. It will not only help in developing the skill of a trainer, but will also become a life skill as it will imbibe the idea of keeping fit for life," he added.


Cryptocurrency Market Update: Bitcoin and gold toying with a massive selloff, is $1,000 in the picture? – FXStreet

Bitcoin price managed to stay above $7,200 support in the wake of rejection from levels under $7,500. The most traded cryptocurrency has stepped above $7,300 but is currently struggling with resistance at $7,400. Across the cryptocurrency market, bears appear to be taking control. All of the top three cryptoassets are slightly in the red. Ethereum is trading marginally below the opening value at $173.31, while Ripple is down 0.88% to trade at $0.20.

Henrik Zeberg, a cryptocurrency trader and analyst on Twitter, is not afraid to speak openly of Bitcoin's possible dive to $1,000. Zeberg is choosing to remain bearish in spite of Bitcoin's price recovery from levels around $3,864 (reached in March) to highs close to $7,500 (earlier this week). The trader points out that Bitcoin is vulnerable at $7,200.

Alongside gold, the world's most precious metal, Bitcoin is likely to fall into another selloff. Zeberg says that "Bitcoin and gold are so misunderstood at this point! We have a strong illiquid phase in front of us."

According to the daily chart, Bitcoin upside is limited by the 50-day SMA. Movement above $7,400 (tipping point) could push the price above $7,500. This is likely to shift the focus back to $8,000.

However, in line with Zeberg's bearish prediction, a bearish pennant pattern puts Bitcoin in grave danger of breaking down to retest support at $6,000 or even $5,000.

Meanwhile, short-term analysis shows Bitcoin is likely to embrace consolidation as long as the RSI keeps levelling off between 50 and 60.
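For reference, the relative strength index the analysis leans on compares recent gains to recent losses over a lookback window. The sketch below computes the simple (unsmoothed) form; charting packages typically apply Wilder's smoothing on top, which is omitted here:

```python
def rsi(closes, period=14):
    """Simple (unsmoothed) relative strength index over the last `period` moves.

    RSI = 100 - 100 / (1 + RS), where RS is the ratio of summed gains
    to summed losses over the window. Values near 100 signal overbought
    conditions; values near 0, oversold.
    """
    deltas = [b - a for a, b in zip(closes, closes[1:])]
    recent = deltas[-period:]
    gains = sum(d for d in recent if d > 0)
    losses = -sum(d for d in recent if d < 0)
    if losses == 0:
        return 100.0  # no losing moves in the window
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

# Illustrative closing prices over a short window
print(rsi([10, 11, 10.5, 11.5], period=3))  # 80.0
```

An RSI hovering between 50 and 60, as described above, means gains only modestly outweigh losses, which is why it reads as consolidation rather than a trend.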


Reddit is finally adding cryptocurrency, screenshots show – Decrypt

Discussion forum Reddit appears to be building a points system that runs on a blockchain, according to a video posted by one Redditor.

Redditor MagoCrypto, who according to his profile is a community manager at Unstoppable Domains, which builds censorship-resistant websites, uploaded the video yesterday. It appears to show a beta implementation of the system.

Screenshots show a wallet, which comes with a blockchain address. Image: Reddit.

"I opened my app yesterday morning and saw the 'wallet' menu option. Went through it, saw 'blockchain' and got super excited to share with rest of y'all," MagoCrypto wrote on Reddit.

A Reddit spokesperson told Decrypt that it is testing a blockchain-based feature but only for one community.

"We continuously experiment with ways to support communities on Reddit. In this instance, were working with one community to test a feature that represents a users involvement in a community. We value and seek out community feedback as we continue to explore features that engage our users and communities, the spokesperson said.

According to the screenshots, the main feature is a cryptocurrency wallet where the user can collect points. Each user will also have their own blockchain address where they will be able to see their points.

The FAQ gives some more clues as to what's coming. Image: Reddit.

It appears that these points will have some value. In the frequently asked questions, there is a section called "Tipping and Transfers." This suggests that the points can be sent to other Redditors and will have value, probably as cryptocurrency. It is unclear whether these points can be sent outside of the Reddit app.

It is possible the points will have further functionality in the Reddit ecosystem. Other questions involve memberships and voting, implying that the points could be used to contribute to Reddit's development.

Reddit has long been a popular source of information for cryptocurrency projects. It is home to many crypto communities, with r/Bitcoin boasting 1.4 million members and r/Cryptocurrency with 994,000. But so far, it has held off on implementing a blockchain-based system. Maybe the crypto-loving Redditors have good karma after all.


Cryptocurrency Market Update: Bitcoin Cash rallies ahead of halving, Bitcoin stable above $7,200, ETH and XRP in the green – FXStreet

The cryptocurrency market is being treated to a couple of halving events this week. Bitcoin Cash and its rival sibling Bitcoin SV will both undergo a mining reward halving, an event that reduces the reward miners receive per block of coins mined. Bitcoin Cash's halving is its first since it hard forked from Bitcoin in 2017. It is scheduled to take place on Wednesday and will see mining rewards slashed in half, from 12.5 BCH to 6.25 BCH. Bitcoin SV's halving, meanwhile, will take place approximately a day after BCH's.
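The arithmetic of a halving is simply a division of the block reward by a power of two, one factor of two per halving event:

```python
def block_reward(initial_reward, halvings):
    """Mining reward per block after a given number of halving events."""
    return initial_reward / 2 ** halvings

# Bitcoin Cash's first halving: the 12.5 BCH reward drops to 6.25 BCH
print(block_reward(12.5, 1))  # 6.25
```

The same formula governs Bitcoin and Bitcoin SV, whose rewards also started at 50 coins per block and halve on a fixed block schedule.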

BCH/USD has surged 8% on the day as investors take their positions ahead of the halving. It is changing hands at $274 after advancing from $252 (the opening value). An intraday high has been reached at $280, but buyers eye $300 while riding on the speculation surrounding the halving event.

Bitcoin price has made a considerable movement above $7,000 this week. The price stepped above $7,400 on Tuesday but lost steam short of $7,500. At the time of writing, BTC is trading at $7,330 following an intraday growth of 1.77%. Immediate support has been established above $7,200, further cementing the buyers position on the market as they look forward to testing the level at $8,000.

Ethereum has also been in a bullish phase this week. The price action took a positive turn on breaking above $140, and the rally above $160 (former resistance) improved sentiment towards Ether, catapulting Ethereum to test resistance at $180. For now, the price is trading at $171 after adding 3.91% to its value on the day.

Ripple price is trading 3.77% higher on the day. The price movement has been bullish from the opening value at $0.1928 to $0.2001 (market value). The step above $0.20 is key to the next rally eyeing $0.30. Therefore, it is essential that bulls find support above this level and shift their focus to $0.30.
