The application to the AIMS African Masters of Machine Intelligence (AMMI) is open! – Uganda

KIGALI, Rwanda, 3 August 2022 -/African Media Agency (AMA)/-

Why AMMI?

The African Masters of Machine Intelligence (AMMI) will prepare well-rounded Machine Intelligence (MI) researchers by focusing on basic research in MI and developing a vast array of applications that respond to both the present and future needs of Africa and the world. AMMI graduates will go on to create and/or join the best industrial and public R&D labs in Africa and beyond, strengthening the African MI community and the scientific community at large, achieving crucial breakthroughs for the global good.

ADMISSION REQUIREMENTS

Basic Requirements

The minimum admission requirements are:

For more detailed information on acceptance documents, please contact the AMMI Admissions Office.

Note: Your original degrees, diplomas, academic certificates and transcripts should be in your possession upon arrival on campus.

Academic History

Required transcript records: Upload your official transcripts with your application. These include transcripts from every post-secondary institution attended, including summer sessions and extension programs. Academic records that are not originally in English or French should be issued in their original language and accompanied by certified English translations.

References

At least three references are required. Be sure to inform your recommenders that they will be contacted to provide a recommendation letter on your behalf. They will be asked to give their impressions of your intellectual ability, aptitude for research or professional work, character, and the quality of your previous work.

Personal Statement

The personal statement should tell us about yourself, your academic achievements, aspirations and other important accomplishments (e.g., projects, online courses, awards) related to AI and machine learning. It should also paint a picture of your academic aspirations, including plans after the master's. (500 words max)

Background Summary

Note: Prospective students applying for AIMS are welcome to apply for AMMI.

Application Deadline: 31 August 2022.

Distributed by African Media Agency (AMA) on behalf of the African Institute for Mathematical Sciences (AIMS).

Source: African Media Agency (AMA)


U.S. Army Research Lab Expands Artificial Intelligence and Machine Learning Contract with Palantir for $99.9M – Business Wire

DENVER--(BUSINESS WIRE)--Palantir Technologies Inc. (NYSE: PLTR) today announced that it will expand its work with the U.S. Army Research Laboratory to implement data and artificial intelligence (AI)/machine learning (ML) capabilities for users across the combatant commands (COCOMs). The contract totals $99.9 million over two years.

Palantir first partnered with the Army Research Lab in 2018 to provide those on the frontlines with state-of-the-art operational data and AI capabilities. Palantir's platform has supported the integration, management, and deployment of relevant data and AI model training to all of the Armed Services, COCOMs, and special operators. This extension grows Palantir's operational RDT&E work to more users globally.

"Maintaining a leading edge through technology is foundational to our mission and partnership with the Army Research Laboratory," said Akash Jain, President of Palantir USG. "Our nation's armed forces require best-in-class software to fulfill their missions today while rapidly iterating on the capabilities they will need for tomorrow's fight. We are honored to support this critical work by teaming up to deliver the most advanced operational AI capabilities available with dozens of commercial and public sector partners."

"By working with the U.S. Army Research Lab, integrating with partner vendors, and iterating with users on the front lines, Palantir's software platforms will continue to quickly implement advanced AI capabilities against some of DOD's most pressing problem sets. We're looking forward to fielding our newest ML, Edge, and Space technologies alongside our U.S. military partners," said Shannon Clark, Senior Vice President of Innovation, Federal. "These technologies will enable operators in the field to leverage AI insights to make decisions across many fused domains. From outer space to the sea floor, and everything in between."

About Palantir Technologies Inc.

Foundational software of tomorrow. Delivered today. Additional information is available at https://www.palantir.com.

Forward-Looking Statements

This press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements may relate to, but are not limited to, Palantir's expectations regarding the amount and the terms of the contract and the expected benefits of our software platforms. Forward-looking statements are inherently subject to risks and uncertainties, some of which cannot be predicted or quantified. Forward-looking statements are based on information available at the time those statements are made and were based on current expectations as well as the beliefs and assumptions of management as of that time with respect to future events. These statements are subject to risks and uncertainties, many of which involve factors or circumstances that are beyond our control. These risks and uncertainties include our ability to meet the unique needs of our customer; the failure of our platforms to satisfy our customer or perform as desired; the frequency or severity of any software and implementation errors; our platforms' reliability; and our customer's ability to modify or terminate the contract. Additional information regarding these and other risks and uncertainties is included in the filings we make with the Securities and Exchange Commission from time to time. Except as required by law, we do not undertake any obligation to publicly update or revise any forward-looking statement, whether as a result of new information, future developments, or otherwise.


Researchers Partner With NIH and Google to Develop AI Learning Modules – University of Arkansas Newswire

Photo by University Relations

Data science researchers will build cloud-based learning modules for biomedical research.

FAYETTEVILLE, Ark. – With supplemental funding from the National Institutes of Health, a team of researchers led by Justin Zhan, professor of data science at the University of Arkansas, will collaborate with NIH and Google software engineers to build cloud-based learning modules for biomedical research.

These modules will help educate biomedical researchers on the ways that artificial intelligence and machine learning, both rapidly becoming important tools in biomedical research, can enhance and streamline data analysis for different types of medical and scientific images.

The new funding, $140,135, has been awarded through the National Institute of General Medical Sciences Institutional Development Award Program. Zhan partnered with Kyle Quinn, associate professor of biomedical engineering, and Larry Cornett, director of the Arkansas IDeA Network of Biomedical Research Excellence at the University of Arkansas for Medical Sciences, which is administering the grant.

In addition to the Arkansas IDeA Network's support, case studies for the learning modules will be developed with support from the data science and the imaging and spectroscopy cores of the Arkansas Integrative Metabolic Research Center.

"Big data is transforming health and biomedical science," Zhan said. "The new technology is rapidly expanding the quantity and variety of imaging modalities, for example, which can tell doctors so much more about their patients. But this transformation has created challenges, particularly with storing and managing massive data sets. Also, while the big data revolution transforms biology and medicine into data-driven sciences, traditional education is responding slowly. Addressing this shortcoming is part of what we're trying to do."

The researchers will secure the technical expertise and resources needed to provide training to students and health-care professionals on the use of artificial intelligence and machine learning, as they apply to biomedical research.

Artificial intelligence is the ability of computer systems to perform tasks that have traditionally required human intelligence. One example of artificial intelligence is machine learning, in which algorithms and computations become more accurate than humans at predicting outcomes. This process demands tremendous computational power, more than standard computer clusters can handle.

The Arkansas researchers will partner with software engineers at Google and the National Institute of General Medical Sciences to address the computational requirements of artificial intelligence-driven research through the use of cloud computing. Cloud computing provides access to computing services over the internet, allowing faster and more flexible solutions in biomedical research.

The cloud computing modules developed by Zhan's team will help researchers understand how artificial intelligence can be used in biomedical sciences to analyze big data. Case studies involving the identification of unique features in large biomedical image sets and the prediction of disease states are expected to help scientists, researchers and clinicians understand how to implement these powerful tools in their work.

About the Arkansas Integrative Metabolic Research Center: Established by a $10.8 million NIH grant in 2021, the Arkansas Integrative Metabolic Research Center focuses on the role of cell and tissue metabolism in disease, development, and repair through research involving advanced imaging, bioenergetics and data science. Quinn is the center director, and Zhan directs the center's Data Science Core.

About the University of Arkansas: As Arkansas' flagship institution, the UofA provides an internationally competitive education in more than 200 academic programs. Founded in 1871, the UofA contributes more than $2.2 billion to Arkansas' economy through the teaching of new knowledge and skills, entrepreneurship and job development, discovery through research and creative activity while also providing training for professional disciplines. The Carnegie Foundation classifies the UofA among the few U.S. colleges and universities with the highest level of research activity. U.S. News & World Report ranks the UofA among the top public universities in the nation. See how the UofA works to build a better world at Arkansas Research News.


Can artificial intelligence really help us talk to the animals? – The Guardian

A dolphin handler makes the signal for "together" with her hands, followed by "create". The two trained dolphins disappear underwater, exchange sounds and then emerge, flip on to their backs and lift their tails. They have devised a new trick of their own and performed it in tandem, just as requested. "It doesn't prove that there's language," says Aza Raskin. "But it certainly makes a lot of sense that, if they had access to a rich, symbolic way of communicating, that would make this task much easier."

Raskin is the co-founder and president of Earth Species Project (ESP), a California non-profit group with a bold ambition: to decode non-human communication using a form of artificial intelligence (AI) called machine learning, and make all the knowhow publicly available, thereby deepening our connection with other living species and helping to protect them. A 1970 album of whale song galvanised the movement that led to commercial whaling being banned. What could a Google Translate for the animal kingdom spawn?

The organisation, founded in 2017 with the help of major donors such as LinkedIn co-founder Reid Hoffman, published its first scientific paper last December. The goal is to unlock communication within our lifetimes. "The end we are working towards is, can we decode animal communication, discover non-human language," says Raskin. "Along the way and equally important is that we are developing technology that supports biologists and conservation now."

Understanding animal vocalisations has long been the subject of human fascination and study. Various primates give alarm calls that differ according to predator; dolphins address one another with signature whistles; and some songbirds can take elements of their calls and rearrange them to communicate different messages. But most experts stop short of calling it a language, as no animal communication meets all the criteria.

Until recently, decoding has mostly relied on painstaking observation. But interest has burgeoned in applying machine learning to deal with the huge amounts of data that can now be collected by modern animal-borne sensors. "People are starting to use it," says Elodie Briefer, an associate professor at the University of Copenhagen who studies vocal communication in mammals and birds. "But we don't really understand yet how much we can do."

Briefer co-developed an algorithm that analyses pig grunts to tell whether the animal is experiencing a positive or negative emotion. Another, called DeepSqueak, judges whether rodents are in a stressed state based on their ultrasonic calls. A further initiative, Project CETI (which stands for the Cetacean Translation Initiative), plans to use machine learning to translate the communication of sperm whales.

Yet ESP says its approach is different, because it is not focused on decoding the communication of one species, but all of them. While Raskin acknowledges there will be a higher likelihood of rich, symbolic communication among social animals (for example primates, whales and dolphins), the goal is to develop tools that could be applied to the entire animal kingdom. "We're species agnostic," says Raskin. "The tools we develop can work across all of biology, from worms to whales."

The motivating intuition for ESP, says Raskin, is work that has shown that machine learning can be used to translate between different, sometimes distant human languages without the need for any prior knowledge.

This process starts with the development of an algorithm to represent words in a physical space. In this many-dimensional geometric representation, the distance and direction between points (words) describes how they meaningfully relate to each other (their semantic relationship). For example, "king" has a relationship to "man" with the same distance and direction that "woman" has to "queen". (The mapping is not done by knowing what the words mean but by looking, for example, at how often they occur near each other.)

It was later noticed that these shapes are similar for different languages. And then, in 2017, two groups of researchers working independently found a technique that made it possible to achieve translation by aligning the shapes. To get from English to Urdu, align their shapes and find the point in Urdu closest to the word's point in English. "You can translate most words decently well," says Raskin.
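For the technically curious, here is a minimal sketch of that alignment trick in Python, using made-up vectors rather than embeddings learned from real text; the word lists, dimensions and the orthogonal Procrustes step are illustrative assumptions, not the code from the 2017 papers or from ESP.

```python
# Toy sketch of cross-lingual embedding alignment (orthogonal Procrustes).
# All vectors here are synthetic; real systems learn them from large corpora.
import numpy as np

rng = np.random.default_rng(0)

# Pretend "English" embeddings: five words in a 3-dimensional space.
en = dict(zip(["king", "queen", "man", "woman", "cat"],
              rng.normal(size=(5, 3))))

# Pretend "other language" embeddings: same geometry, but rotated,
# which is the situation the alignment technique exploits.
rotation, _ = np.linalg.qr(rng.normal(size=(3, 3)))
other = {w + "_x": v @ rotation for w, v in en.items()}

# Align the two point clouds using a few known anchor pairs.
anchors = ["king", "man", "woman"]
A = np.stack([en[w] for w in anchors])            # source anchor vectors
B = np.stack([other[w + "_x"] for w in anchors])  # target anchor vectors
U, _, Vt = np.linalg.svd(A.T @ B)
W = U @ Vt                                        # best rotation mapping A onto B

# "Translate" a word that was not an anchor: map it, then take the
# nearest neighbour (by cosine similarity) in the target space.
query = en["queen"] @ W
nearest = max(other, key=lambda w: other[w] @ query /
              (np.linalg.norm(other[w]) * np.linalg.norm(query)))
print("queen ->", nearest)  # expected: queen_x
```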

ESP's aspiration is to create these kinds of representations of animal communication, working on both individual species and many species at once, and then explore questions such as whether there is overlap with the universal human shape. "We don't know how animals experience the world," says Raskin, "but there are emotions, for example grief and joy, it seems some share with us and may well communicate about with others in their species. I don't know which will be the more incredible: the parts where the shapes overlap and we can directly communicate or translate, or the parts where we can't."

He adds that animals don't only communicate vocally. Bees, for example, let others know of a flower's location via a "waggle dance". There will be a need to translate across different modes of communication too.

The goal is "like going to the moon", acknowledges Raskin, but the idea also isn't to get there all at once. Rather, ESP's roadmap involves solving a series of smaller problems necessary for the bigger picture to be realised. This should see the development of general tools that can help researchers trying to apply AI to unlock the secrets of species under study.

For example, ESP recently published a paper (and shared its code) on the so-called "cocktail party problem" in animal communication, in which it is difficult to discern which individual in a group of the same animals is vocalising in a noisy social environment.

"To our knowledge, no one has done this end-to-end detangling [of animal sound] before," says Raskin. The AI-based model developed by ESP, which was tried on dolphin signature whistles, macaque coo calls and bat vocalisations, worked best when the calls came from individuals that the model had been trained on; but with larger datasets it was able to disentangle mixtures of calls from animals not in the training cohort.

Another project involves using AI to generate novel animal calls, with humpback whales as a test species. The novel calls, made by splitting vocalisations into micro-phonemes (distinct units of sound lasting a hundredth of a second) and using a language model to "speak" something whale-like, can then be played back to the animals to see how they respond. "If the AI can identify what makes a random change versus a semantically meaningful one, it brings us closer to meaningful communication," explains Raskin. "It is having the AI speak the language, even though we don't know what it means yet."

A further project aims to develop an algorithm that ascertains how many call types a species has at its command by applying self-supervised machine learning, which does not require any labelling of data by human experts to learn patterns. In an early test case, it will mine audio recordings made by a team led by Christian Rutz, a professor of biology at the University of St Andrews, to produce an inventory of the vocal repertoire of the Hawaiian crow, a species that, Rutz discovered, has the ability to make and use tools for foraging and is believed to have a significantly more complex set of vocalisations than other crow species.

Rutz is particularly excited about the project's conservation value. The Hawaiian crow is critically endangered and only exists in captivity, where it is being bred for reintroduction to the wild. It is hoped that, by taking recordings made at different times, it will be possible to track whether the species's call repertoire is being eroded in captivity (specific alarm calls may have been lost, for example), which could have consequences for its reintroduction; that loss might be addressed with intervention. "It could produce a step change in our ability to help these birds come back from the brink," says Rutz, adding that detecting and classifying the calls manually would be labour intensive and error prone.

Meanwhile, another project seeks to understand automatically the functional meanings of vocalisations. It is being pursued with the laboratory of Ari Friedlaender, a professor of ocean sciences at the University of California, Santa Cruz. The lab studies how wild marine mammals, which are difficult to observe directly, behave underwater and runs one of the world's largest tagging programmes. Small electronic biologging devices attached to the animals capture their location, type of motion and even what they see (the devices can incorporate video cameras). The lab also has data from strategically placed sound recorders in the ocean.

ESP aims to first apply self-supervised machine learning to the tag data to automatically gauge what an animal is doing (for example whether it is feeding, resting, travelling or socialising) and then add the audio data to see whether functional meaning can be given to calls tied to that behaviour. (Playback experiments could then be used to validate any findings, along with calls that have been decoded previously.) This technique will be applied to humpback whale data initially; the lab has tagged several animals in the same group, so it is possible to see how signals are given and received. Friedlaender says he was "hitting the ceiling" in terms of what currently available tools could tease out of the data. "Our hope is that the work ESP can do will provide new insights," he says.

But not everyone is as gung ho about the power of AI to achieve such grand aims. Robert Seyfarth is a professor emeritus of psychology at the University of Pennsylvania who has studied social behaviour and vocal communication in primates in their natural habitat for more than 40 years. While he believes machine learning can be useful for some problems, such as identifying an animal's vocal repertoire, there are other areas, including the discovery of the meaning and function of vocalisations, where he is sceptical it will add much.

The problem, he explains, is that while many animals can have sophisticated, complex societies, they have a much smaller repertoire of sounds than humans. The result is that the exact same sound can be used to mean different things in different contexts, and it is only by studying the context (who the individual calling is, how they are related to others, where they fall in the hierarchy, who they have interacted with) that meaning can hope to be established. "I just think these AI methods are insufficient," says Seyfarth. "You've got to go out there and watch the animals."

There is also doubt about the concept that the shape of animal communication will overlap in a meaningful way with human communication. "Applying computer-based analyses to human language, with which we are so intimately familiar, is one thing," says Seyfarth. "But it can be quite different doing it to other species." "It is an exciting idea, but it is a big stretch," says Kevin Coffey, a neuroscientist at the University of Washington who co-created the DeepSqueak algorithm.

Raskin acknowledges that AI alone may not be enough to unlock communication with other species. But he refers to research that has shown many species communicate in ways more complex than humans have ever imagined. The stumbling blocks have been our ability to gather sufficient data and analyse it at scale, and our own limited perception. "These are the tools that let us take off the human glasses and understand entire communication systems," he says.


Elon Musk and Silicon Valley’s Overreliance on Artificial Intelligence – The Wire

When the richest man in the world is being sued by one of the most popular social media companies, it's news. But while most of the conversation about Elon Musk's attempt to cancel his $44 billion contract to buy Twitter is focusing on the legal, social, and business components, we need to keep an eye on how the discussion relates to one of the tech industry's most buzzy products: artificial intelligence.

The lawsuit shines a light on one of the most essential issues for the industry to tackle: What can and can't AI do, and what should and shouldn't AI do? The Twitter v Musk contretemps reveals a lot about the thinking about AI in tech and startup land and raises issues about how we understand the deployment of the technology in areas ranging from credit checks to policing.

At the core of Musk's claim for why he should be allowed out of his contract with Twitter is an allegation that the platform has done a poor job of identifying and removing spam accounts. Twitter has consistently claimed in quarterly filings that less than 5% of its active accounts are spam; Musk thinks it's much higher than that. From a legal standpoint, it probably doesn't really matter if Twitter's spam estimate is off by a few percent, and Twitter's been clear that its estimate is subjective and that others could come to different estimates with the same data. That's presumably why Musk's legal team lost in a hearing on July 19 when they asked for more time to perform detailed discovery on Twitter's spam-fighting efforts, suggesting that likely isn't the question on which the trial will turn.

Regardless of the legal merits, it's important to scrutinise the statistical and technical thinking from Musk and his allies. Musk's position is best summarised in his filing from July 15, which states: "In a May 6 meeting with Twitter executives, Musk was flabbergasted to learn just how meager Twitter's process was. Namely: Human reviewers randomly sampled 100 accounts per day (less than 0.00005% of daily users) and applied unidentified standards to somehow conclude every quarter for nearly three years that fewer than 5% of Twitter users were false or spam." The filing goes on to express the flabbergastedness of Musk by adding, "That's it. No automation, no AI, no machine learning."

Perhaps the most prominent endorsement of Musk's argument here came from venture capitalist David Sacks, who quoted it while declaring, "Twitter is toast." But there's an irony in Musk's complaint here: If Twitter were using machine learning for the audit as he seems to think they should, and only labeling spam that was similar to old spam, it would actually produce a lower, less accurate estimate than it has now.

There are three components to Musk's assertion that deserve examination: his basic statistical claim about what a representative sample looks like, his claim that the spam-level auditing process should be automated or use AI or machine learning, and an implicit claim about what AI can actually do.

On the statistical question, this is something any professional anywhere near the machine learning space should be able to answer (so can many high school students). Twitter uses a daily sampling of accounts to scrutinise a total of 9,000 accounts per quarter (averaging about 100 per calendar day) to arrive at its under-5% spam estimate. Though that sample of 9,000 users per quarter is, as Musk notes, a very small portion of the 229 million active users the company reported in early 2022, a statistics professor (or student) would tell you that that's very much not the point. Statistical significance isn't determined by what percentage of the population is sampled but simply by the actual size of the sample in question. As Facebook whistleblower Sophie Zhang put it, you can make the comparison to soup: "It doesn't matter if you have a small or giant pot of soup, if it's evenly mixed you just need a spoonful to taste-test."

The whole point of statistical sampling is that you can learn most of what you need to know about the variety of a larger population by studying a much smaller but decently sized portion of it. Whether the person drawing the sample is a scientist studying bacteria, or a factory quality inspector checking canned vegetables, or a pollster asking about political preferences, the question isn't "what percentage of the overall whole am I checking?", but rather "how much should I expect my sample to look like the overall population for the characteristics I'm studying?" If you had to crack open a large percentage of your cans of tomatoes to check for their quality, you'd have a hard time making a profit, so you want to check the fewest possible to get within a reasonable range of confidence in your findings.

Also read: Why Understanding This '60s Sci-Fi Novel Is Key to Understanding Elon Musk

While this thinking does go against the grain of certain impulses (there's a reason why many people make this mistake), there is also a way to make this approach to sampling more intuitive. Think of the goal in setting sample size as getting a reasonable answer to the question, "If I draw another sample of the same size, how different would I expect it to be?" A classic approach to explaining this problem is to imagine you've bought a great mass of marbles that are supposed to come in a specific ratio: 95% purple marbles and 5% yellow marbles. You want to do a quality inspection to ensure the delivery is good, so you load them into one of those bingo game hoppers, turn the crank, and start counting the marbles you draw, in each color. Let's say your first sample of 20 marbles has 19 purple and one yellow; should you be confident that you got the right mix from your vendor? You can probably intuitively understand that the next 20 random marbles you draw could end up being very different, with zero yellows or seven. But what if you draw 1,000 marbles, around the same as the typical political poll? What if you draw 9,000 marbles? The more marbles you draw, the more you'd expect the next drawing to look similar, because it's harder to hide random fluctuations in larger samples.

There are online calculators that can let you run the numbers yourself. If you only draw 20 marbles and get one yellow, you can have 95% confidence that the yellows would be between 0.13% and 24.9% of the total (not very exact). If you draw 1,000 marbles and get 50 yellows, you can have 95% confidence that yellows would be between 3.7% and 6.5% of the total; closer, but perhaps not something you'd sign your name to in a quarterly filing. At 9,000 marbles with 450 yellow, you can have 95% confidence the yellows are between 4.56% and 5.47%; you're now accurate to within a range of less than half a percent, and at that point Twitter's lawyers presumably told them they'd done enough for their public disclosure.
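If you'd rather check those figures than take a calculator's word for it, a few lines of Python get you close; this sketch uses the simple normal-approximation interval, so it won't exactly match the bounds quoted above, which appear to come from a more exact method.

```python
# Rough 95% confidence intervals for a sampled proportion, using the normal
# (Wald) approximation. The article's figures likely come from a more exact
# method (e.g. Wilson or Clopper-Pearson), so small samples differ noticeably.
from math import sqrt

def wald_interval(successes: int, n: int, z: float = 1.96):
    p = successes / n
    margin = z * sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

for successes, n in [(1, 20), (50, 1_000), (450, 9_000)]:
    low, high = wald_interval(successes, n)
    print(f"{successes}/{n} yellow marbles: roughly {low:.2%} to {high:.2%}")

# The 9,000-marble case prints roughly 4.55% to 5.45%, in line with the
# 4.56%-5.47% range quoted above.
```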

Printed Twitter logos are seen in this picture illustration taken April 28, 2022. Photo: Reuters/Dado Ruvic/Illustration/File Photo

This reality, that statistical sampling works to tell us about large populations based on much smaller samples, underpins every area where statistics is used, from checking the quality of the concrete used to make the building you're currently sitting in, to ensuring the reliable flow of internet traffic to the screen you're reading this on.

It's also what drives all current approaches to artificial intelligence today. Specialists in the field almost never use the term "artificial intelligence" to describe their work, preferring to use "machine learning". But another common way to describe the entire field as it currently stands is "applied statistics". Machine learning today isn't really computers thinking in anything like what we assume humans do (to the degree we even understand how humans think, which isn't a great degree); it's mostly pattern-matching and -identification, based on statistical optimisation. If you feed a convolutional neural network thousands of images of dogs and cats and then ask the resulting model to determine if the next image is of a dog or a cat, it'll probably do a good job, but you can't ask it to explain what makes a cat different from a dog on any broader level; it's just recognising the patterns in pictures, using a layering of statistical formulas.

Stack up statistical formulas in specific ways, and you can build a machine learning algorithm that, fed enough pictures, will gradually build up a statistical representation of edges, shapes, and larger forms until it recognises a cat, based on the similarity to thousands of other images of cats it was fed. There's also a way in which statistical sampling plays a role: You don't need pictures of all the dogs and cats, just enough to get a representative sample, and then your algorithm can infer what it needs to about all the other pictures of dogs and cats in the world. And the same goes for every other machine learning effort, whether it's an attempt to predict someone's salary using everything else you know about them, with a boosted random forests algorithm, or to break down a list of customers into distinct groups with a clustering algorithm such as k-means.
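As a purely illustrative sketch, this is roughly what such a layering of statistical formulas looks like in code: a tiny, untrained convolutional network in PyTorch that turns an image into two class scores. The architecture and the cat/dog labels are assumptions chosen for brevity; a real classifier would be trained on thousands of labelled images.

```python
# A toy convolutional network: stacked statistical operations that map an
# image to two class scores (say, "cat" and "dog"). Untrained, so its output
# is meaningless until it has been fit to labelled examples.
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),   # learn local, edge-like patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # combine edges into larger shapes
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                            # two scores: cat vs dog
)

image = torch.rand(1, 3, 64, 64)                 # one fake 64x64 RGB image
scores = model(image)
print(scores.softmax(dim=1))                     # "probabilities", random until trained
```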

You don't absolutely have to understand statistics as well as a student who's recently taken a class in order to understand machine learning, but it helps. Which is why the statistical illiteracy paraded by Musk and his acolytes here is at least somewhat surprising.

But more important, in order to have any basis for overseeing the creation of a machine-learning product, or to have a rationale for investing in a machine-learning company, it's hard to see how one could be successful without a decent grounding in the rudiments of machine learning, and where and how it is best applied to solve a problem. And yet, team Musk here is suggesting they do lack that knowledge.

Once you understand that all machine learning today is essentially pattern-matching, it becomes clear why you wouldn't rely on it to conduct an audit such as the one Twitter performs to check for the proportion of spam accounts. "They're hand-validating so that they ensure it's high-quality data," explained security professional Leigh Honeywell, who's been a leader at firms like Slack and Heroku, in an interview. She added, "any data you pull from your machine learning efforts will by necessity be not as validated as those efforts." If you only rely on patterns of spam you've already identified in the past and already engineered into your spam-detection tools, in order to find out how much spam there is on your platform, you'll only recognise old spam patterns, and fail to uncover new ones.

Also read: India Versus Twitter Versus Elon Musk Versus Society

Where Twitter should be using automation and machine learning to identify and remove spam is outside of this audit function, which the company seems to do. It wouldn't otherwise be possible to suspend half a million accounts every day and lock millions of accounts each week, as CEO Parag Agrawal claims. In conversations I've had with cybersecurity workers in the field, it's quite clear that a large amount of automation is used at Twitter (though machine learning specifically is actually relatively rare in the field because the results often aren't as good as other methods, marketing claims by allegedly AI-based security firms to the contrary).

At least in public claims related to this lawsuit, prominent Silicon Valley figures are suggesting they have a different understanding of what machine learning can do, and when it is and isn't useful. This disconnect between how many nontechnical leaders in that world talk about AI, and what it actually is, has significant implications for how we will ultimately come to understand and use the technology.

The general disconnect between the actual work of machine learning and how it's touted by many company and industry leaders is something data scientists often chalk up to marketing. It's very common to hear data scientists in conversation among themselves declare that "AI is just a marketing term". It's also quite common to have companies using no machine learning at all describe their work as "AI" to investors and customers, who rarely know the difference or even seem to care.

This is a basic reality in the world of tech. In my own experience talking with investors who make investments in AI technology, it's often quite clear that they know almost nothing about these basic aspects of how machine learning works. I've even spoken to CEOs of rather large companies that rely at their core on novel machine learning efforts to drive their product, who also clearly have no understanding of how the work actually gets done.

Not knowing or caring how machine learning works, what it can or can't do, and where its application can be problematic could lead society to significant peril. If we don't understand the way machine learning actually works (most often by identifying a pattern in some dataset and applying that pattern to new data), we can be led deep down a path in which machine learning wrongly claims, for example, to measure someone's face for trustworthiness (when this is entirely based on surveys in which people reveal their own prejudices), or that crime can be predicted (when many hyperlocal crime numbers are highly correlated with more police officers being present in a given area, who then make more arrests there), based almost entirely on a set of biased data or wrong-headed claims.

If we're going to properly manage the influence of machine learning on our society – on our systems and organisations and our government – we need to make sure these distinctions are clear. It starts with establishing a basic level of statistical literacy, and moves on to recognising that machine learning isn't magic, and that it isn't, in any traditional sense of the word, "intelligent": that it works by pattern-matching to data, that the data has various biases, and that the overall project can produce many misleading and/or damaging outcomes.

It's an understanding one might have expected, or at least hoped, to find among some of those investing most of their life, effort, and money into machine-learning-related projects. If even people that deep aren't making those efforts to sort fact from fiction, it's a poor omen for the rest of us, and the regulators and other officials who might be charged with keeping them in check.

This article was originally published on Future Tense, a partnership between Slate magazine, Arizona State University, and New America.


PhD Candidate in Machine Learning and Signal Processing job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY – NTNU | 303403 – Times Higher…

About the position

At the Department of Electronic Systems (IES) we have a vacancy for a PhD candidate in machine learning and signal processing.

The position is associated with the Centre for Geophysical Forecasting (CGF) at NTNU, which is one of the Norwegian centres for research-driven innovation, funded by the Research Council of Norway and industry partners. The goal of the CGF is to become a world-leading research and innovation hub for the geophysical sciences, creating innovative new products and services in earth sensing and forecasting domains. As the global ecosystem enters a period of dramatic change, there is a strong need for accurate monitoring and forecasting of the Earth. Machine learning and signal processing play important roles here.

The PhD project will focus on applying state-of-the-art machine learning and signal processing techniques for the effective analysis of massive-size geophysical data. The models should be able to produce predictions and enable early warning systems in various geosciences applications. Special focus will be devoted to the interpretability of the model predictions.

This PhD project is further part of the PERSEUS doctoral programme: a collaboration between NTNU (Norway's largest university), 11 top-level academic partners in 8 European countries, and 8 industrial partners within sectors of high societal relevance. PERSEUS will recruit 40 PhD candidates who want to contribute to a smart, safe and sustainable future. We are looking for highly skilled PhD candidates motivated to approach societal challenges within one of the following thematic areas:

The current PhD, with its focus on machine learning and signal processing, goes particularly well with the first area in this list.

All participants in the PERSEUS network bring unique and important qualities with them into the doctoral programme. The PERSEUS PhD candidates will have the opportunity to collaborate with researchers in the partner institutions and in other project consortia, and benefit from these collaborative research and education activities. You will work alongside other highly motivated and talented PhD candidates and researchers. You will also have access to the knowledge base, state-of-the-art research infrastructure, and impact orientation of the partners in the team.

In addition to your education and development within the thematic research area, you will gain transferable skills within project development and management, science communication, research ethics, innovation and entrepreneurial thinking, as well as basic university didactics.

You will be employed by NTNU. During your studies, you will do a 2-3 month international stay and a 1-2 month national stay with one of the PERSEUS partners. This will most fruitfully be achieved by having strong contact with partners in CGF. This will allow you to extend your network within academia and industry, and to learn about your research area from an academic, innovation, and societal perspective.

The duration of the PhD employment is 36 months.

Starting gross salary is 501,200 NOK/year (equal to approx. 49,312 EUR/year at the exchange rate of July 2022).

We are looking for PhD candidates of all nationalities who want to contribute to our quest to create knowledge for a better world. PERSEUS recruits candidates according to the EU's mobility rule, i.e. applicants cannot have spent more than 12 months in Norway during the last 3 years, must be within the first four years of their research careers and must not yet have been awarded a doctoral degree.

We believe in fair and open processes. All applications will be considered through a transparent evaluation procedure, with independent observers involved.

The position's place of work is the NTNU campus in Trondheim. You will report to the Head of Department.

We look forward to welcoming you to the CGF and the PERSEUS teams.

Duties of the position

Required selection criteria

In addition, the candidate must have:

The appointment is to be made in accordance with the Regulations concerning the degrees of Philosophiae Doctor (PhD) and Philosophiae Doctor (PhD) in artistic research, and the national guidelines for appointment as PhD, postdoctor and research assistant.

Preferred selection criteria

Personal characteristics

We offer

Salary and conditions

PhD candidates are remunerated in code 1017, normally at NOK 501 200 per annum before tax; however, this may be negotiable (increased) depending on the qualifications and research experience of the candidate. From the salary, 2% is deducted as a contribution to the Norwegian Public Service Pension Fund.

The period of employment is 3 years.

Appointment to a PhD position requires that you are admitted to the PhD programme in Electronic Systems within three months of employment, and that you participate in an organized PhD programme during the employment period.

The engagement is to be made in accordance with the regulations in force concerning State Employees and Civil Servants, and the acts relating to Control of the Export of Strategic Goods, Services and Technology. Candidates who by assessment of the application and attachments are seen to conflict with the criteria in the latter law will be prohibited from recruitment to NTNU. After the appointment you must assume that there may be changes in the area of work.

It is a prerequisite that you can be present at and accessible to the institution on a daily basis.

About the application

The application and supporting documentation to be used as the basis for the assessment must be in English.

Publications and other scientific work must accompany the application. Please note that applications are only evaluated based on the information available at the application deadline. You should ensure that your application shows clearly how your skills and experience meet the criteria set out above.

Please submit your application electronically via Jobbnorge website. Applications submitted elsewhere/incomplete applications will not be considered. Applicants must upload the following documents within the closing date:

In the evaluation of which candidate is best qualified, emphasis will be placed on education, experience and personal suitability.

NTNU is committed to following evaluation criteria for research quality according to The San Francisco Declaration on Research Assessment (DORA).

Working at NTNU

NTNU believes that inclusion and diversity are our strength. We want to recruit people with different competencies, educational backgrounds, life experiences and perspectives to contribute to solving our social responsibilities within education and research. We will accommodate our employees' needs.

NTNU is working actively to increase the number of women employed in scientific positions and has a number of resources to promote equality.

The city of Trondheim is a modern European city with a rich cultural scene. Trondheim is the innovation capital of Norway with a population of 200,000. The Norwegian welfare state, including healthcare, schools, kindergartens and overall equality, is probably the best of its kind in the world. Professional subsidized day-care for children is easily available. Furthermore, Trondheim offers great opportunities for education (including international schools) and possibilities to enjoy nature, culture and family life, and has low crime rates and clean air quality.

As an employee at NTNU, you must at all times adhere to the changes that developments in the subject area entail and the organizational changes that are adopted.

A public list of applicants with name, age, job title and municipality of residence is prepared after the application deadline. If you wish to be exempted from entry on the public applicant list, this must be justified. Assessment will be made in accordance with current legislation. You will be notified if the reservation is not accepted.

If you have any questions about the position, please contact Giampiero Salvi (giampiero.salvi@ntnu.no).

Application deadline: 30.09.2022.

NTNU - knowledge for a better world

The Norwegian University of Science and Technology (NTNU) creates knowledge for a better world and solutions that can change everyday life.

Department of Electronic Systems

The digitalization of Norway is impossible without electronic systems. We are Norway's leading academic environment in this field, and contribute our expertise in areas ranging from nanoelectronics, photonics, signal processing, radio technology and acoustics to satellite technology and autonomous systems. Knowledge of electronic systems is also vital for addressing important challenges in transport, energy, the environment, and health. The Department of Electronic Systems is one of seven departments in the Faculty of Information Technology and Electrical Engineering.

Deadline: 30th September 2022
Employer: NTNU - Norwegian University of Science and Technology
Municipality: Trondheim
Scope: Fulltime
Duration: Temporary
Place of service: NTNU Campus Trondheim


New cryptocurrency oversight legislation arrives as industry shakes – PBS NewsHour

WASHINGTON (AP) – A bipartisan group of senators on Wednesday proposed a bill to regulate cryptocurrencies, the latest attempt by Congress to formulate ideas on how to oversee a multibillion-dollar industry that has been racked by collapsing prices and lenders halting operations.

The regulations offered by Senate Agriculture Committee chair Debbie Stabenow and top Republican member John Boozman would authorize the Commodity Futures Trading Commission to be the default regulator for cryptocurrencies. That would be in contrast with bills proposed by other members of Congress and consumer advocates, who have suggested giving the authority to the Securities and Exchange Commission.

This year, crypto investors have seen prices plunge and companies crater, with fortunes and jobs disappearing overnight, and some firms have been accused by federal regulators of running an illegal securities exchange. Bitcoin, the largest digital asset, trades at a fraction of its all-time high, down from more than $68,000 in November 2021 to about $23,000 on Wednesday. Industry leaders have referred to this period as a "crypto winter", and lawmakers have been desperate to implement stringent oversight.

The bill by Stabenow, a Democrat from Michigan, and Boozman, of Arkansas, would require all cryptocurrency platforms (including traders, dealers, brokers and sites that hold crypto for customers) to register with the CFTC.

READ MORE: Cryptocurrency meltdown is wake-up call for many, including Congress

The CFTC is historically an underfunded and much smaller regulator than the SEC, which has armies of investigators to look at potential wrongdoing. The bill attempts to alleviate these issues by imposing on the crypto industry user fees, which in turn would fund more robust supervision of the industry by the CFTC.

"Our bill will empower the CFTC with exclusive jurisdiction over the digital commodities spot market, which will lead to more safeguards for consumers, market integrity and innovation in the digital commodities space," Boozman said in a statement.

Sens. Cory Booker, D-N.J., and John Thune, R-S.D., are co-sponsors of the bill.

"It's critical that the (CFTC) has the proper tools to regulate this growing market," Thune said.

The legislation can be added to the list of proposals that have come out of Congress this year.

Sen. Pat Toomey, R-Pa., in April introduced legislation, called the Stablecoin TRUST Act, that would create a framework to regulate stablecoins, which have seen massive losses this year. Stablecoins are a type of cryptocurrency pegged to a specific value, usually the U.S. dollar, another currency or gold.

Additionally, in June, Sens. Kirsten Gillibrand, D-N.Y., and Cynthia Lummis, R-Wyo., proposed a wide-ranging bill, called the Responsible Financial Innovation Act. That bill proposed legal definitions of digital assets and virtual currencies; would require the IRS to adopt guidance on merchant acceptance of digital assets and charitable contributions; and would make a distinction between digital assets that are commodities and those that are securities, which has not been done.

Along with the Toomey legislation and the Lummis-Gillibrand legislation, a proposal is being worked out in the House Financial Services Committee, though those negotiations have stalled.

Committee chair Maxine Waters, D-Calif., said last month that while she, top Republican member Patrick McHenry of North Carolina and Treasury Secretary Janet Yellen had made considerable progress toward an agreement on the legislation, "we are unfortunately not there yet, and will therefore continue our negotiations over the August recess."

President Joe Biden's working group on financial markets last November issued a report calling on Congress to pass legislation that would regulate stablecoins, and Biden earlier this year issued an executive order calling on a variety of agencies to look at ways to regulate digital assets.


TechScape: I'm no longer making predictions about cryptocurrency. Here's why – The Guardian

I've been writing about cryptocurrency for my entire career. In that time, one point I've always stuck to is simple: don't listen to me for investment advice. Today, I want to quantify why.

Bitcoin was created in 2009, while I was in my first year at university. As an economics student and massive nerd it sat squarely at the intersection of my interests. By my final year of uni in 2011, the original cryptocurrency was experiencing its first boom and bust cycle. It rose from a low of $0.30 to a high of $32.34 that summer, before crashing back down to less than $3 when Mt. Gox, the original bitcoin exchange, was hacked. (This will become a theme.)

That was also the year the Guardian first covered the currency, with Ruth Whippman warning: "Its critics in the political sphere fear that it could give rise to an online Wild West of gambling, prostitution and global bazaars for contraband."

I was very much on the outside looking in, though. Not being a regular drug user (cf massive nerd), the mainstream use of bitcoin (getting pills or weed delivered by post from the Silk Road) passed me by, so I found it more of an intellectual curiosity than anything else.

This is perhaps in part because the first thing I remember hearing about bitcoin was a tale, probably apocryphal, of someone using their gaming PC to mine the currency in their dorm room in a heatwave. The air conditioning failed, the user reported in a forum post, and heatstroke left them with mild brain damage. You can see why I was unimpressed.

By the second major boom, I was covering economics for the New Statesman. And that's where the trouble starts.

In my first published piece using the word bitcoin (the first time the New Statesman had covered the topic), I confidently declared: "This is what a bubble looks like." At the time, bitcoin was trading at around $40 a coin.

It has never gotten that low again.

I was right that there was a bubble in the offing: the price of bitcoin had doubled in two months, and would double twice more before it popped less than a month later. But the crash, which would have been huge for any other normal asset, was a reduction of around half, taking bitcoin to the lows of three weeks prior.

A decade on, the memory of this bold claim still haunts me, and I refuse to make predictions about the future of any cryptocurrency. In fact, I've taken to joking that the best way to make money, historically, is to do the opposite of what I say.

So I put it to the test.

The Alex Hern bitcoin investment strategy

Obviously, I don't give actual investment advice. So I reviewed every article I've ever written that mentions bitcoin, and sorted them based on whether or not a reader would think they were good news for the crypto, or bad news. There's an element of value judgment to this, of course: you might disagree with my decision that a story about the Winklevii launching a bitcoin price tracker in 2014 is broadly positive, or that a story about Mt. Gox reopening after a hack (another hack) is broadly negative. My hope is that the disagreements average out.

Then, I paired the stories against the price of bitcoin on the day they were written, and asked a simple question: if you'd bought $10 of bitcoin every time I wrote something that seemed like bad news, and sold $10 of bitcoin every time I wrote something that seemed like good news, how would your investment have performed?
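For anyone who wants to see the shape of that calculation, here is a minimal sketch in Python. The three entries are placeholders standing in for the full archive of articles and prices, so the output will not match the real tally reported below.

```python
# Sketch of the buy-on-bad-news / sell-on-good-news tally described above.
# The entries here are placeholders; the real exercise used every bitcoin
# article the author wrote, paired with the price on the day it was published.
articles = [
    ("2013-04-03", "negative", 115.0),    # (date, read of the piece, BTC price in USD)
    ("2013-11-29", "positive", 1130.0),
    ("2022-06-20", "negative", 20500.0),
]

btc_held = 0.0
net_spent = 0.0
for _date, sentiment, price in articles:
    if sentiment == "negative":           # bad news for bitcoin: buy $10 worth
        btc_held += 10.0 / price
        net_spent += 10.0
    else:                                 # good news: sell $10 worth
        btc_held -= 10.0 / price
        net_spent -= 10.0

latest_price = articles[-1][2]
print(f"net spent ${net_spent:.2f}, holding {btc_held:.4f} BTC "
      f"(worth ${btc_held * latest_price:.2f} at the last price in the list)")
```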

The bottom line: you would have spent a net of $420 on bitcoin, and have a crypto wallet containing around 1.1 bitcoin as a result, worth a little over $22,000 at today's market value.

Oof.

Going over the specifics, though, gives me a bit of cheer. Well over half that gain comes from a total of just seven pieces written in 2013: six negative and one positive. At the end of that run, you'd have spent $50, and own 0.7 bitcoin. Those articles have an outsized influence on the overall calculation, due to how much bitcoin's value has increased in the nine years since they were published.

Bitcoin had two boom and bust cycles in 2013. The first, in April, took it to a high of $266. The second, in December, was bigger – much bigger. The price of a coin spiked at $1,238, and fell to a low of $687. The rush of pieces I wrote about the currency when I started at the Guardian, through late 2013 and the first half of 2014, contribute much less to the bottom line, even though there were more of them.

It was also the period with the most positive stories for bitcoin. In 2014, the potential of the currency was still untapped: the idea that bitcoin or the blockchain might prove revolutionary wasn't a hackneyed promise, but something that might be just around the corner. In that boom, I wrote as many positive stories as negative.

For every article about bitcoin hitting an all-time high of $269, there was another about a £1m hack of a payment processor. For every long feature asking if bitcoin was about to change the world, there was a warning from a Dutch central banker that the hype was worse than tulip mania (and he should know).

The timing of the pieces didn't quite balance out, though, and by the end of that boom you would have turned your 0.7 bitcoin into 0.9 while cashing out as many dollars as you put in. And in that period, those bitcoin would have gone from $100 to more than $500.

From 2014 to the most recent boom, however, the money you put in would start being drowned out by the bitcoin you already own. $10 at the beginning of 2014 bought you around 0.01 bitcoin, and so 10 negative pieces from me would have sizeably increased your position.

Three years later, it would take 30 negative pieces for you to acquire the same amount of bitcoin. That meant the impact of the ICO boom – the first of the great expansions of the sector from a handful of cryptocurrencies to a whole ecosystem of shitcoins – was muted compared to what came before, despite stories about Iceland becoming a miner's paradise and Kodak bringing out a branded cryptominer, leading to a flurry of buying and selling.

And three years after that, at the beginning of 2020, a $10 investment in the cryptocurrency would get you just 0.001 BTC. That's good news for our theoretical investor, because 2020 marked my most positive reporting on the currency. Stories such as the US government seizing bitcoins used on the Silk Road were a sign of the growing professionalism of the sector, and, for the first time, bitcoin was enough of a fixture of the tech scene that even in a comparative slump the Guardian was still covering it.
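To make the arithmetic in the last few paragraphs explicit, here is a tiny sketch using round-number prices implied by the figures above (roughly $1,000 per bitcoin in early 2014, $3,000 three years later and $10,000 at the start of 2020); these are illustrative values, not exact historical prices.

# How much bitcoin a fixed $10 trade buys as the price rises,
# and how many negative pieces it takes to add 0.1 BTC.
trade_usd = 10.0
for year, price in [(2014, 1_000.0), (2017, 3_000.0), (2020, 10_000.0)]:
    btc_per_article = trade_usd / price
    articles_needed = 0.1 / btc_per_article
    print(f"{year}: at ${price:,.0f}/BTC, each article buys "
          f"{btc_per_article:.4f} BTC; {articles_needed:.0f} articles add 0.1 BTC")

At the assumed 2014 price that is 10 articles; three years later it is 30, and by 2020 it is 100, which is why the later booms barely move the size of the holding.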

On to the latest boom and bust cycle, where finally the investor starts to lose out and I claw back some of my reputation. From its peak of $69,000 in November last year, bitcoin has fallen to around a third of that value. I've diligently covered the collapse, which has been by far the most brutal the sector has faced. That means the tracker has sunk almost $200 into bitcoin, even as the overall value of the holdings has plummeted from a high of around $50,000 in March to its present level.

What next?

The question going forward, of course, is whether the pattern holds up. Will you continue to make money if you buy when I'm grumpy about crypto, and sell when I'm optimistic? Obviously (see above) I'm not about to make any strong predictions, but I doubt we'll ever again see as sharp an increase in price as we saw in the last decade, which means I'll never make a call that plays out as badly as the ones in those initial pieces from 2013.

Which is not to say I can't make other terrible calls. Remember Dejitaru Tsuka, the shitcoin that was sold with my name? I broke my rules and warned readers: "I do not think you should buy this shitcoin, nor any others." Well, if you'd bought £10 worth of Tsuka when I said that, you'd now have £4,000.

If you want to read the complete version of the newsletter, please subscribe to receive TechScape in your inbox every Wednesday.

Go here to read the rest:
TechScape: I'm no longer making predictions about cryptocurrency. Here's why - The Guardian

Should the cryptocurrency crash scare retailers? – RetailWire

Aug 02, 2022

Nearly 75 percent of retailers plan to accept either cryptocurrency or stablecoin payments within the next two years, according to Deloitte's "Merchants Getting Ready For Crypto" study.

The survey of 2,000 U.S. retail executives was taken in the first two weeks of December 2021, just before valuations on digital currencies collapsed.

According to Barron's, Bitcoin, the dominant token, continues to trade at around one-third of its November 2021 all-time high, with the market capitalization of the overall crypto space also tumbling.

Deloitte's study, done in collaboration with PayPal, found retailers bullish on digital assets' potential.

Survey participants saw the top barriers to adoption as the security of the payment platforms (cited by 43 percent), followed by the changing regulatory landscape (37 percent) and the instability of the digital currency market (36 percent).

Crypto's crash has been dramatized by the meltdowns of the stablecoin Terra, the crypto hedge fund Three Arrows Capital and numerous crypto lending platforms, although risky assets overall, including tech stocks, have been battered in the broader bear market.

Gucci, Balenciaga and Tag Heuer are among those this year joining Whole Foods, Nordstrom, PacSun and Crate & Barrel in accepting cryptocurrencies. American Eagle Outfitters drew attention for deciding not to accept crypto payments while recently launching an NFT apparel shop. Craig Brommers, American Eagle's chief marketing officer, said at CommerceNext 2022: "When we thought about our 15- to 25-year-old customer, the reality is they were not ready for cryptocurrency."

DISCUSSION QUESTIONS: Have you become any more or less confident about the value of cryptocurrencies as a form of retail payment since the start of the year? Have the barriers to adoption changed?

"Accepting cryptocurrency is a great business plan as long as you treat it as you would any foreign currency."

"At this point, cryptocurrency is not the norm in any circles outside of criminal activity."

"It will take some time for cryptocurrency to become mainstream, but there are already many early adopters in the market."

Read more here:
Should the cryptocurrency crash scare retailers? - RetailWire

Cryptocurrency Prices And Updates: Bitcoin, Ethereum Up, Solana Most Searched Crypto Today – Outlook India

The largest cryptocurrencies, Bitcoin (BTC) and Ethereum (ETH), were trading in the green on Thursday morning, while Solana emerged as the top searched coin.

The global crypto market capitalisation went up by 1.93 per cent to $1.08 trillion as of 8.50 am. However, the global crypto volume was down by 14.45 per cent to $66.18 billion, according to Coinmarketcap data.

The trading volume in the decentralised finance coins section is about $6.03 billion, or 9.11 per cent of the total crypto market 24-hour volume. The volume of all stablecoins is $60.62 billion, or about 91.6 per cent of the total crypto market volume in the last 24 hours.

As of 8.50 am, Bitcoin was up by 1.43 per cent at $23,139.53 and currently commands a 41.04 per cent dominance in the crypto market.

The CEO of MicroStrategy, the world's largest corporate Bitcoin holder, Michael Saylor, said that he will be stepping down from his position and will serve as executive chairman to better focus on buying Bitcoin. "I believe that splitting the roles of Chairman and CEO will enable us to better pursue our two corporate strategies of acquiring and holding Bitcoin and growing our enterprise analytics software business. Phong will be empowered as CEO to manage overall corporate operations," he was quoted as saying by Investing.com.

It seems that Bitcoin holders have taken this news with mixed feelings, as the price of Bitcoin rose from its low of $22,835 on August 3 to $23,560, then started inching downwards and finally settled at around $23,100. Around 5.04 am on August 4, the Bitcoin price hit an intraday low of $22,808.

The price of Ethereum (ETH) this morning was $1,653.85 and it was up by 2.74 per cent. According to a recent report by Nansen, about $2.7 billion was spent by Ethereum holders on minting non-fungible tokens (NFTs) on the Ethereum blockchain. The Nansen report also found that 10,88,888 wallet addresses that minted NFTs on Ethereum were unique.

Regarding price analysis, Ethereum made no significant moves. Ethereum was at $1,609 at 8.50 am on August 3, touched a high of $1,677 at 5.54 pm, and then dropped to a low of $1,608 at 4.05 am on August 4. Its trading volume was down by 17.82 per cent at $16,515,110,881.

Solana (SOL) seems to be todays most searched coin on Coinmarketcap as of this morning. The price of Solana was up by 0.6 per cent at $39.31.

Just four hours ago it was reported that nearly 8,000 Solana wallet addresses created on the third-party wallet app Slope were drained. Solana researchers clarified that although thousands of wallets were drained, the exploit was confined to one wallet on Solana, and that they are actively investigating the matter, Cryptobriefing reported.

Cardano (ADA) rose by 1.27 per cent at $0.5069. The 24-hour trading volume for ADA was down by 24.36 per cent at $513,564,797.

Binance (BNB) was up by 6.84 per cent at $302.11. Its 24-hour trading volume gained 20.83 per cent at $2,177,333,667.

Dogecoin (DOGE) was up by 0.85 per cent at $0.06703. Its 24-hour trading volume was down by 19.82 per cent at $265,045,974.

Shiba Inu (SHIB) recently launched its NFT gaming service, giving its holders one more use case for the token. Shiba Inu (SHIB) was up by 1.37 per cent at $0.00001193.

Yearn.Finance (YFI) was up by 2.08 per cent at $10,889.01. Its 24-hour trading volume was down by 34 per cent at $99,234,994.

Avalanche (AVAX) rose 3.47 per cent at $23.69 and its 24-hour trading volume fell 4.7 per cent at $574,540,355.

Aave (AAVE) was trading with a gain of 3.74 per cent at $97.27 and its 24-hour trading volume was down by 4.2 per cent at $213,889,120.

Continued here:
Cryptocurrency Prices And Updates: Bitcoin, Ethereum Up, Solana Most Searched Crypto Today - Outlook India