How Putin used internet censorship and fake news for six months to push the Ukraine war agenda – Sky News

Russia's failure to secure a quick victory against Ukraine forced Vladimir Putin to adapt.

Over the past six months, Russia has been fighting an information war alongside its military campaign.

How Moscow rerouted the internet

On 30 May the internet connection in occupied Kherson dropped. It returned within hours, but people could no longer access sites such as Facebook and Twitter, or Ukrainian news outlets.

The internet had been rerouted to Russia. The online activity of those in Kherson was now visible to Moscow and was subject to censorship.

Internet traffic in Kherson was originally routed from network hubs elsewhere in the country and passed through Kyiv.

These connections remained in place during the first three months of the invasion, before the traffic was rerouted.

As Russia gained strength in southern Ukraine, reports emerged that it was taking over control of local internet providers in Kherson either through cooperation or by force.

Once in control, Russia could reroute the internet to Moscow via a state-owned internet provider in Crimea.

This briefly happened on 1 May, before Ukrainian officials managed to reverse it. But on 30 May, with Russia now in control of more infrastructure, it happened again. It now appears permanent.

With the people of Kherson now forced to use Russian internet if they want to go online, they are subject to Moscow's censorship.

For three months they have been unable to access Facebook, Twitter and other social media sites. Some Ukrainian news websites are also blocked.

Alp Toker, director of NetBlocks, an internet monitoring company, says the rerouting has "effectively placed Ukrainian citizens under the purview and surveillance of the Russian state at the flick of a switch."

Internet operators and monitors report internet access in large areas of Kherson is censored to a similar level as experienced in Russia. Some smaller areas are experiencing even tougher censorship, with some Google services blocked.

Ukrainians in Kherson are finding ways to evade Russia's efforts to monitor and censor their online activity.

When Ivanna (not her real name) leaves her home, she deletes social media and messaging apps like Instagram and Telegram in case she is stopped by a soldier who may search her phone.

"You need to be careful," she tells Sky News, using an online messaging app.

She goes online using a VPN (virtual private network) which hides the user's location and allows them to bypass Russian censorship.

Searches for the software spiked in Kherson when internet controls tightened.

Russia has also shut down the mobile phone network in Kherson and new SIM cards are being sold for locals to use.

Ivanna told Sky News a passport is needed to buy the SIM cards, prompting fears their use may be tracked.

Cautious, she paid a stranger to buy a SIM under his name.

TV and phone communications targeted

In the unoccupied parts of Ukraine, Moscow has sought to destroy the communication infrastructure - such as TV towers and communication centres.

It's a tactic Russia initially wanted to avoid as it did not want to damage resources that would be useful as an occupying force, explains William Alberque, director of strategy, technology, and arms control at the International Institute for Strategic Studies.

"Russia thought they were going to win so fast [so wouldn't] destroy infrastructure as it was going to own that infrastructure," he tells Sky News.


But by keeping the lines open, Ukrainians were able to communicate with one another and the wider world.

Ultimately Russia moved to destroy what it was unable to quickly seize.

Examples of the attacks on communication infrastructure have been logged by the Centre for Information Resilience, which has been tracking and verifying attacks like these using open-source information.

One incident logged by the group was a communication centre in southern Ukraine.

Russia's attempt to control information has also included targeting TV towers.


Power cuts in Ukraine have also caused the nation's biggest broadband and mobile internet providers to lose connectivity.

Disinformation has doubled since the war began

Russia has used disinformation during the war to influence those in Ukraine, the country's allies, as well as its own population at home.

Examples of pro-Russian fake news include a clumsily faked video of the Ukrainian president telling people to surrender (known as a deepfake video) and social media posts accusing bombing victims of being actors.

Some of Russia's efforts have been effective. Moscow claimed the invasion was in part to tackle Nazism in the Ukrainian government. Searches for "nazi" in both Russia and worldwide spiked in the first week of the war.

The number of disinformation sites has more than doubled since the Russian invasion in February, according to Newsguard, which provides credibility rankings for news and information sites.

In March, its researchers found 116 sites publishing Russia-Ukraine war-related disinformation. By August, that number had risen to 250.

It's not possible to show that all of those sites are run on Russia's orders. However, Moscow has allocated a boosted pot of funds to its propaganda arm.

The independent Russian-language news site The Moscow Times reported the government had "drastically increased funding for state-run media amid the war with Ukraine".

The article cited figures provided by the Russian government. It said 17.4bn rubles (244m) had been allocated for "mass media" compared to 5.4bn rubles (76m) the year before.

It said in March, once the war was underway, some 11.9bn rubles (167m) were spent. This is more than twice as much as the combined spend of the two months before, which was 5bn rubles (70m).

The research comes as no surprise to Mr Alberque, who says Russia's disinformation campaign has been "constant".

"As they shift into war mode, [Russia] has to go to directly paying salaries and no longer hoping that people will echo its messages but paying them to send a certain number of messages per day," he told Sky News.

Looking forward, Mr Alberque believes the death of the daughter of an ally of Vladimir Putin will be a distraction for those directing Russia's disinformation efforts.

Russia has pointed the finger at Ukraine for carrying out the fatal car bombing in Moscow but Kyiv denies any involvement.

An apparent high-profile assassination in the capital has sparked a number of conspiracy theories, including claims the responsibility may lie with a Russian group looking to influence the war.

"The Russian government is going to have to try to control this narrative," Mr Alberque explains.

He adds that propaganda resources that would be focused on Ukraine may now be drawn into the fallout of the death, saying: "I think it's going to be a huge information sink for them because it's going to take up time and attention."

The Data and Forensics team is a multi-skilled unit dedicated to providing transparent journalism from Sky News. We gather, analyse and visualise data to tell data-driven stories. We combine traditional reporting skills with advanced analysis of satellite images, social media and other open source information. Through multimedia storytelling we aim to better explain the world while also showing how our journalism is done.



The WikiLeaks Party | Transparency Platform

TRANSPARENCY. ACCOUNTABILITY. JUSTICE

The WikiLeaks Party believes that truthful, accurate, factual information is the foundation of democracy and is essential to the protection of human rights and freedoms. Where the truth is suppressed or distorted, corruption and injustice flourish.

The WikiLeaks Party insists on transparency of government information and action, so that these may be evaluated using all the available facts. With transparency comes accountability, and it is only when those in positions of power are held accountable for their actions, that all Australians have the possibility of justice.

The WikiLeaks Party is fearless in its pursuit of truth and good governance, regardless of which party is in power. In each and every aspect of government we will strive to achieve transparency, accountability and justice. This is our core platform.

Exercising Oversight

In a parliamentary democracy, Parliament has three functions:

1) To represent the people (the democratic function)

2) To design, guide and implement policy (the legislative function)

3) To scrutinise and oversee government practice, honesty and efficiency (the oversight function).

However, what we have witnessed in Australia, despite exceptional work by individual politicians in all the major parties, is a failure of the oversight function. There has been a gradual acceptance that once a single party or a coalition has gained the majority required to form a government, Parliament then becomes little more than an extension of that government's executive machinery: the houses of Parliament effectively become rubber stamps for its policy agendas. This problem becomes particularly pressing when a single party gains a majority in both Houses, a spectre that remains a distinct possibility after each federal election.

The WikiLeaks Party aims to restore genuine independent scrutiny into our political process.

This is why we are campaigning only for the Upper House. We want to return the Senate to its core function as a genuine Upper House offering independent scrutiny of government, protecting the interests of the people, and ensuring that light is shone upon bad practices.

Calling Time on Corruption of Purpose

Parliament is also failing Australians in its democratic function.

In a true democracy, Parliament must facilitate, not obstruct, our democratic obligation to dissent.

Yet we are witness to a degeneration of democracy into political party oligarchy, in which dissent is stifled and the public bureaucracy is contained and docile.

Members of Parliament fail to represent the people who elect them partly because of the party system: they are constrained by an obligation to toe their party line. Instead of voting according to conscience or according to the values they have publicly espoused to their constituents, they vote as they are instructed by their Party leadership.

Far too often politicians conceive of their public role not as scrutineers of government, but as partisan supporters of their own party. Their sense of duty to the party, and to the networks of political patronage to which they owe their nomination and their career prospects, outweighs their sense of duty to the electorate.

How rare is it to see a Member of Parliament, whether in government or in opposition, stepping out of line, or raising difficult or controversial issues? To engage in backbench rebellion spells death for the career politician, putting an end to their prospects of career advancement or ministerial appointment.

The result is a system of party oligarchy in which conspiracy and corruption of purpose flourish.

It's time for a culture shift.

It's time to give dissidents a voice in our political system.

It's time to inject some genuine, independent scrutiny into our political process.

The WikiLeaks Party in the Senate

The WikiLeaks Party will run candidates in future Australian Senate elections.

Our Senators will be genuinely independent in their scrutiny of the government, demanding thorough transparency in its contractual arrangements with private companies.

We will bring our core principles of transparency, accountability and justice to bear on all the major issues currently facing Australia.

WikiLeaks Party candidates are ideally suited to the work of the Senate: they are skilled in understanding complexity and they are experienced in dealing with large amounts of documents produced by bureaucracies and spotting their hidden significance and tricks.

The WikiLeaks Party will be vigilant against corruption in all its forms.

The WikiLeaks organisation was pioneering in its use of scientific journalism, reporting information with reference to publicly available primary sources. The WikiLeaks Party will promote scientific policy; decision-making based on research, evidence and clear, transparent principles.

In particular we will be fearless in the pursuit of the 21st century freedoms which are essential to the creation of any meaningful democracy.



Solve the problem of unstructured data with machine learning – VentureBeat

Were you unable to attend Transform 2022? Check out all of the summit sessions in our on-demand library now! Watch here.

We're in the midst of a data revolution. The volume of digital data created within the next five years will total twice the amount produced so far, and unstructured data will define this new era of digital experiences.

Unstructured data, meaning information that doesn't follow conventional models or fit into structured database formats, represents more than 80% of all new enterprise data. To prepare for this shift, companies are finding innovative ways to manage, analyze and maximize the use of data in everything from business analytics to artificial intelligence (AI). But decision-makers are also running into an age-old problem: How do you maintain and improve the quality of massive, unwieldy datasets?

With machine learning (ML), that's how. Advancements in ML technology now enable organizations to efficiently process unstructured data and improve quality assurance efforts. With a data revolution happening all around us, where does your company fall? Are you saddled with valuable, yet unmanageable datasets, or are you using data to propel your business into the future?

There's no disputing the value of accurate, timely and consistent data for modern enterprises; it's as vital as cloud computing and digital apps. Despite this reality, however, poor data quality still costs companies an average of $13 million annually.

MetaBeat 2022

MetaBeat will bring together thought leaders to give guidance on how metaverse technology will transform the way all industries communicate and do business on October 4 in San Francisco, CA.

To navigate data issues, you may apply statistical methods to measure data shapes, which enables your data teams to track variability, weed out outliers, and reel in data drift. Statistics-based controls remain valuable to judge data quality and determine how and when you should turn to datasets before making critical decisions. While effective, this statistical approach is typically reserved for structured datasets, which lend themselves to objective, quantitative measurements.
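As a rough illustration of such statistics-based controls, here is a minimal sketch using only Python's standard library; the z-score and drift thresholds are arbitrary choices for the example, not recommendations:

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

def mean_drift(baseline, current, tolerance=0.1):
    """Report whether the current batch's mean has drifted beyond a
    relative tolerance of the baseline mean."""
    base = statistics.fmean(baseline)
    cur = statistics.fmean(current)
    return abs(cur - base) > tolerance * abs(base)
```

In practice such checks would run on each incoming batch, with flagged rows quarantined for review rather than silently dropped.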

But what about data that doesn't fit neatly into Microsoft Excel or Google Sheets?

When these types of unstructured data are at play, it's easy for incomplete or inaccurate information to slip into models. When errors go unnoticed, data issues accumulate and wreak havoc on everything from quarterly reports to forecasting projections. A simple copy-and-paste approach from structured data to unstructured data isn't enough and can actually make matters much worse for your business.

The common adage "garbage in, garbage out" is highly applicable to unstructured datasets. Maybe it's time to trash your current data approach.

When considering solutions for unstructured data, ML should be at the top of your list. That's because ML can analyze massive datasets and quickly find patterns among the clutter, and with the right training, ML models can learn to interpret, organize and classify unstructured data types in any number of forms.

For example, an ML model can learn to recommend rules for data profiling, cleansing and standardization, making these efforts more efficient and precise in industries like healthcare and insurance. Likewise, ML programs can identify and classify text data by topic or sentiment in unstructured feeds, such as those on social media or within email records.
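As a toy illustration of sentiment classification, the sketch below uses a hand-written lexicon standing in for a trained ML model; the word lists are invented for the example:

```python
from collections import Counter

# Toy lexicons: a real system would learn these weights from labeled data.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"poor", "hate", "terrible", "angry"}

def classify_sentiment(text):
    """Label free text by counting lexicon hits; ties fall back to neutral."""
    words = Counter(text.lower().split())
    pos = sum(words[w] for w in POSITIVE)
    neg = sum(words[w] for w in NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```

A production pipeline would replace the lexicons with a model trained on the feed in question, but the shape of the task (raw text in, label out) is the same.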

As you improve your data quality efforts through ML, keep in mind a few key dos and don'ts.

Your unstructured data is a treasure trove for new opportunities and insights. Yet only 18% of organizations currently take advantage of their unstructured data, and data quality is one of the top factors holding more businesses back.

As unstructured data becomes more prevalent and more pertinent to everyday business decisions and operations, ML-based quality controls provide much-needed assurance that your data is relevant, accurate, and useful. And when you aren't hung up on data quality, you can focus on using data to drive your business forward.

Just think about the possibilities that arise when you get your data under control, or, better yet, when you let ML take care of the work for you.

Edgar Honing is senior solutions architect at AHEAD.



Machine Learning Gives Cats One More Way To Control Their Humans – Hackaday

For those who choose to let their cats live a more or less free-range life, there are usually two choices. One, you can adopt the role of servant and run for the door whenever the cat wants to get back inside from their latest bird-murdering jaunt. Or two, install a cat door and let them come and go as they please, sometimes with a present for you in their mouth. Heads you win, tails you lose.

There's another way, though: just let the cat ask to be let back in. That's the approach that [Tennis Smith] took with this machine-learning kitty doorbell. It's based on a Raspberry Pi 4, which lives inside the house, and a USB microphone that's outside the front door. The Pi uses TensorFlow Lite to classify the sounds it picks up outside, and when one of those sounds fits the model of a cat's meow, a message is dispatched to AWS Lambda. From there a text message is sent to alert [Tennis] that the cat is ready to come back in.
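The overall control flow might be sketched like this in Python. The function names (`record_clip`, `classify`, `send_sms`) and the confidence threshold are hypothetical stand-ins for the project's actual TensorFlow Lite and AWS Lambda plumbing, not its real code:

```python
MEOW_THRESHOLD = 0.8  # assumed confidence cut-off, not the project's value

def should_notify(scores, threshold=MEOW_THRESHOLD):
    """Return True when the classifier's 'meow' score clears the threshold."""
    return scores.get("meow", 0.0) >= threshold

def run_once(record_clip, classify, send_sms):
    """One pass of the doorbell loop: capture, classify, maybe notify."""
    clip = record_clip()      # capture audio from the USB microphone
    scores = classify(clip)   # e.g. a TFLite model -> {label: confidence}
    if should_notify(scores):
        send_sms("Cat at the door!")
        return True
    return False
```

Swapping meows for barks in a design like this amounts to changing which label the threshold is applied to, which matches the single-line tweak described below.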

There's a ton of useful information included in the repo for this project, including step-by-step instructions for getting Amazon Web Services working on the Pi. If you're a dog person, fear not: changing from meows to barks is as simple as tweaking a single line of code. And if you'd rather not be at the beck and call of a cat but still want to avoid the evidence of a prey event on your carpet, machine learning can help with that too.

[via Tom's Hardware]


Ray, the machine learning tech behind OpenAI, levels up to Ray 2.0 – VentureBeat


Over the last two years, one of the most common ways for organizations to scale and run increasingly large and complex artificial intelligence (AI) workloads has been with the open-source Ray framework, used by companies from OpenAI to Shopify and Instacart.

Ray enables machine learning (ML) models to scale across hardware resources and can also be used to support MLOps workflows across different ML tools. Ray 1.0 came out in September 2020 and has had a series of iterations over the last two years.

Today, the next major milestone was released, with the general availability of Ray 2.0 at the Ray Summit in San Francisco. Ray 2.0 extends the technology with the new Ray AI Runtime (AIR) that is intended to work as a runtime layer for executing ML services. Ray 2.0 also includes capabilities designed to help simplify building and managing AI workloads.

Alongside the new release, Anyscale, which is the lead commercial backer of Ray, announced a new enterprise platform for running Ray. Anyscale also announced a new $99 million round of funding co-led by existing investors Addition and Intel Capital with participation from Foundation Capital.


"Ray started as a small project at UC Berkeley and it has grown far beyond what we imagined at the outset," said Robert Nishihara, cofounder and CEO at Anyscale, during his keynote at the Ray Summit.

It's hard to overstate the foundational importance and reach of Ray in the AI space today.

Nishihara went through a laundry list of big names in the IT industry that are using Ray during his keynote. Among the companies he mentioned is ecommerce platform vendor Shopify, which uses Ray to help scale its ML platform that makes use of TensorFlow and PyTorch. Grocery delivery service Instacart is another Ray user, benefitting from the technology to help train thousands of ML models. Nishihara noted that Amazon is also a Ray user across multiple types of workloads.

Ray is also a foundational element for OpenAI, which is one of the leading AI innovators, and is the group behind the GPT-3 Large Language Model and DALL-E image generation technology.

"We're using Ray to train our largest models," Greg Brockman, CTO and cofounder of OpenAI, said at the Ray Summit. "So, it has been very helpful for us in terms of just being able to scale up to a pretty unprecedented scale."

Brockman commented that he sees Ray as a developer-friendly tool, and the fact that it is a third-party tool that OpenAI doesn't have to maintain is helpful, too.

"When something goes wrong, we can complain on GitHub and get an engineer to go work on it, so it reduces some of the burden of building and maintaining infrastructure," Brockman said.

For Ray 2.0, a primary goal for Nishihara was to make it simpler for more users to be able to benefit from the technology, while providing performance optimizations that benefit users big and small.

Nishihara commented that a common pain point in AI is that organizations can get tied into a particular framework for a certain workload, but realize over time they also want to use other frameworks. For example, an organization might start out just using TensorFlow, but realize they also want to use PyTorch and HuggingFace in the same ML workload. With the Ray AI Runtime (AIR) in Ray 2.0, it will now be easier for users to unify ML workloads across multiple tools.

Model deployment is another common pain point that Ray 2.0 is looking to help solve, with the Ray Serve deployment graph capability.

"It's one thing to deploy a handful of machine learning models. It's another thing entirely to deploy several hundred machine learning models, especially when those models may depend on each other and have different dependencies," Nishihara said. "As part of Ray 2.0, we're announcing Ray Serve deployment graphs, which solve this problem and provide a simple Python interface for scalable model composition."
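The idea of composing several interdependent models behind one interface can be illustrated with plain Python callables. This is a conceptual sketch of model composition under assumed toy models, not Ray Serve's actual deployment-graph API:

```python
# Toy "models": a shared preprocessor fans out to two classifiers,
# and a combiner merges their outputs into one response.

def preprocess(text):
    return text.lower().strip()

def topic_model(text):
    return {"topic": "sports" if "match" in text else "other"}

def length_model(text):
    return {"length": len(text)}

def combine(*results):
    merged = {}
    for r in results:
        merged.update(r)
    return merged

def graph(text):
    """Run the preprocessed input through both models, then merge outputs."""
    clean = preprocess(text)
    return combine(topic_model(clean), length_model(clean))
```

A deployment graph adds what plain functions lack: each node scaled and versioned independently, which is the problem the Ray Serve feature targets.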

Looking forward, Nishihara's goal with Ray is to help enable a broader use of AI by making it easier to develop and manage ML workloads.

"We'd like to get to the point where any developer or any organization can succeed with AI and get value from AI," Nishihara said.



Tesla wants to take machine learning silicon to the Dojo – The Register

To quench the thirst for ever-larger AI and machine learning models, Tesla has revealed a wealth of details at Hot Chips 34 on its fully custom supercomputing architecture, called Dojo.

The system is essentially a massive composable supercomputer, although unlike what we see on the Top 500, it's built from an entirely custom architecture that spans the compute, networking, and input/output (I/O) silicon to instruction set architecture (ISA), power delivery, packaging, and cooling. All of it was done with the express purpose of running tailored, specific machine learning training algorithms at scale.

"Real world data processing is only feasible through machine learning techniques, be it natural-language processing, driving in streets that are made for human vision to robotics interfacing with the everyday environment," Ganesh Venkataramanan, senior director of hardware engineering at Tesla, said during his keynote speech.

However, he argued that traditional methods for scaling distributed workloads have failed to accelerate at the rate necessary to keep up with machine learning's demands. In effect, Moore's Law is not cutting it and neither are the systems available for AI/ML training at scale, namely some combination of CPU/GPU or in rarer circumstances by using speciality AI accelerators.

"Traditionally we build chips, we put them on packages, packages go on PCBs, which go into systems. Systems go into racks," said Venkataramanan. The problem is each time data moves from the chip to the package and off the package, it incurs a latency and bandwidth penalty.

So to get around the limitations, Venkataramanan and his team started over from scratch.

"Right from my interview with Elon, he asked me what can you do that is different from CPUs and GPUs for AI. I feel that the whole team is still answering that question."

Tesla's Dojo Training Tile

This led to the development of the Dojo training tile, a self-contained compute cluster occupying a half-cubic foot capable of 556 TFLOPS of FP32 performance in a 15kW liquid-cooled package.

Each tile is equipped with 11GB of SRAM and is connected over a 9TB/s fabric using a custom transport protocol throughout the entire stack.

"This training tile represents unparalleled amounts of integration from computer to memory to power delivery, to communication, without requiring any additional switches," Venkataramanan said.

At the heart of the training tile is Tesla's D1, a 50 billion transistor die, based on TSMC's 7nm process. Tesla says each D1 is capable of 22 TFLOPS of FP32 performance at a TDP of 400W. However, Tesla notes that the chip is capable of running a wide range of floating point calculations including a few custom ones.

Tesla's Dojo D1 die

"If you compare transistors for millimeter square, this is probably the bleeding edge of anything which is out there," Venkataramanan said.

Tesla then took 25 D1s, binned them for known good dies, and then packaged them using TSMC's system-on-wafer technology to "achieve a huge amount of compute integration at very low latency and very-high bandwidth," he said.

However, the system-on-wafer design and vertically stacked architecture introduced challenges when it came to power delivery.

According to Venkataramanan, most accelerators today place power directly adjacent to the silicon. And while proven, this approach means a large area of the accelerator has to be dedicated to those components, which made it impractical for Dojo, he explained. Instead, Tesla designed its chips to deliver power directly through the bottom of the die.

"We could build an entire datacenter or an entire building out of this training tile, but the training tile is just the compute portion. We also need to feed it," Venkataramanan said.

Tesla's Dojo Interface Processor

For this, Tesla also developed the Dojo Interface Processor (DIP), which functions as a bridge between the host CPU and training processors. The DIP also serves as a source of shared high-bandwidth memory (HBM) and as a high-speed 400Gbit/sec NIC.

Each DIP features 32GB of HBM and up to five of these cards can be connected to a training tile at 900GB/s for an aggregate of 4.5TB/s to the host for a total of 160GB of HBM per tile.
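The per-tile aggregates follow directly from the per-card figures; a quick sanity check, assuming simple multiplication as the article implies:

```python
# Per-card figures quoted above
hbm_per_card_gb = 32          # HBM capacity per DIP card
cards_per_tile = 5            # DIP cards per training tile
bandwidth_per_card_gb_s = 900 # link speed per card, GB/s

hbm_per_tile_gb = hbm_per_card_gb * cards_per_tile                  # 160 GB per tile
aggregate_bandwidth_gb_s = bandwidth_per_card_gb_s * cards_per_tile # 4500 GB/s = 4.5 TB/s
```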

Tesla's V1 configuration combines six of these tiles, or 150 D1 dies, in an array supported by four host CPUs, each equipped with five DIP cards, to achieve a claimed exaflop of BF16 or CFP8 performance.

Tesla's V1 Arrangement

Put together, Venkataramanan says the architecture detailed in depth here by The Next Platform enables Tesla to overcome the limitations associated with traditional accelerators from the likes of Nvidia and AMD.

"How traditional accelerators work, typically you try to fit an entire model into each accelerator. Replicate it, and then flow the data through each of them," he said. "What happens if we have bigger and bigger models? These accelerators can fall flat because they run out of memory."

This isn't a new problem, he noted. Nvidia's NV-switch for example enables memory to be pooled across large banks of GPUs. However, Venkataramanan argues this not only adds complexity, but introduces latency and compromises on bandwidth.

"We thought about this right from the get go. Our compute tiles and each of the dies were made for fitting big models," Venkataramanan said.

Such a specialized compute architecture demands a specialized software stack. However, Venkataramanan and his team recognized that programmability would either make or break Dojo.

"Ease of programmability for software counterparts is paramount when we design these systems," he said. "Researchers won't wait for your software folks to write a handwritten kernel for adapting to a new algorithm that we want to run."

To do this, Tesla ditched the idea of using kernels, and designed Dojo's architecture around compilers.

"What we did was we used PyTorch. We created an intermediate layer, which helps us parallelize to scale out hardware beneath it. Underneath everything is compiled code," he said. "This is the only way to create software stacks that are adaptable to all those future workloads."

Despite the emphasis on software flexibility, Venkataramanan notes that the platform, which is currently running in their labs, is limited to Tesla use for the time being.

"We are focused on our internal customers first," he said. "Elon has made it public that over time, we will make this available to researchers, but we don't have a time frame for that."


Migraine classification by machine learning with functional near-infrared spectroscopy during the mental arithmetic task | Scientific Reports

The experimental process can be divided into five steps, as shown in Fig. 6. First, subjects were recruited to perform a mental arithmetic task (MAT); while they did the task, the device measured blood-oxygenation information. Second, the signal was filtered to obtain a more useful signal. The filtered signal was then segmented according to the three stages of the task, and features were extracted from each segment. Finally, the features were imported into the machine learning classifiers. The details are explained in the following subsections.

The experimental process of this study. After the fNIRS signal is obtained, it is filtered, segmented and feature-extracted, then imported into machine learning to obtain classification results; finally, the credibility of the results is confirmed with cross-validation.
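A compressed sketch of the filter, segment and extract steps in Python; the moving-average filter and mean-amplitude features below are simplifications assumed for illustration, not the study's actual filtering or feature set:

```python
def moving_average(signal, window=3):
    """Crude low-pass smoothing of a hemoglobin time series."""
    half = window // 2
    return [
        sum(signal[max(0, i - half): i + half + 1]) /
        len(signal[max(0, i - half): i + half + 1])
        for i in range(len(signal))
    ]

def segment(signal, boundaries):
    """Split the recording at stage boundaries, e.g. rest/task/recovery."""
    cuts = [0, *boundaries, len(signal)]
    return [signal[a:b] for a, b in zip(cuts, cuts[1:])]

def features(segments):
    """One mean-amplitude feature per stage, ready for a classifier."""
    return [sum(s) / len(s) for s in segments]
```

The resulting feature vectors would then be passed, per subject, to whichever classifier is under cross-validation.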

Nowadays, more and more techniques have been investigated to explore the relationship between migraine and cerebrovascular reactivity or cerebral hemodynamics. Some studies have used positron emission tomography (PET) to scan the prefrontal cortex (PFC) and assess whether the suboccipital stimulator is effective5. Others have found that the ventromedial prefrontal cortex is more active in medication-overuse headache (MOH) than chronic migraine (CM) subjects through functional magnetic resonance imaging (fMRI)6. Both PET and fMRI are non-invasive imaging modalities, but the former requires the application of radioactive imaging agents, which leads to concern about ionizing radiation. Although the latter does not involve radioactive agents, the use of a strong magnetic field excludes patients with an artificial pacemaker or any metal implants.

As early as 2007, a study used near-infrared spectroscopy (NIRS) to evaluate differences in regional cerebral blood flow (rCBF) changes of the middle cerebral artery between migraine patients and healthy controls during a breath-holding task7. In recent years, NIRS has gradually gained ground in the pain field8,9,10,11. Moreover, NIRS is non-invasive, non-radioactive, real-time, low-cost, portable, and easy to operate. It therefore has very high potential as a tool for investigating migraine.

The continuous-wave NIRS system used in this experiment is a self-developed instrument from our laboratory, shown in Fig. 7. The optode is the core of the system, consisting of three light detectors and two near-infrared light emitters arranged alternately with a spacing of 3 cm. The four channels of the system cover the PFC, approximately at positions F7, Fp1, Fp2, and F8 of the international 10-20 system, as shown in Fig. 8. The photodetector is an OPT101 (Texas Instruments Inc), chosen for its small size and high sensitivity in the near-infrared band. The multi-wavelength LEDs (Epitex Inc, L4*730/4*805/4*850-40Q96-I) provide three wavelengths, 730 nm, 805 nm, and 850 nm; in this study we use only 730 nm and 850 nm. The sampling frequency is about 17 Hz. The rear of the device is equipped with an adjustment knob that lets the device fit properly and reduces the influence of external light. The hardware is powered by a rechargeable 7.4 V battery, composed of two 3.7 V lithium batteries in series, connected directly to the microcontroller unit (MCU), an Arduino Pro Mini. The other components (the light detectors, a Bluetooth module, and a current regulator) are powered from the MCU's output pins. The current regulator is a TLC5916 (Texas Instruments Inc), which supplies a constant current to the LEDs. The MCU converts the raw light-intensity signal into hemoglobin values and sends the data to the computer over Bluetooth for storage; the computer displays the hemoglobin values in real time.

The wearable functional near-infrared spectroscopy system. (a) OPT101 (b) LED (c) Power source (d) MCU (e) Bluetooth module (f) Regulator knob.

Schematic positions of the fNIRS optodes in the international 10-20 system.

The MAT is a common and effective stress task; research has confirmed that it produces mental stress in healthy subjects13,14 and in migraine subjects15. Subjects were seated in a quiet space to avoid outside interference, informed of the procedure, and given a short practice run to eliminate deviations caused by unfamiliarity with the operation. The MAT was divided into three stages (Rest, Task, and Recovery) with a total duration of 555 s16, as shown in Fig. 9. In the rest stage, subjects closed their eyes and relaxed in the seat for 1 minute. In the task stage, subjects viewed questions and answered them through a touch screen. In the recovery stage, subjects repeated the rest-stage procedure for 3 minutes. After completion of the MAT, the computer saved the data as comma-separated values.

The MAT architecture. (a) A two- or three-digit addition or subtraction question is displayed at the center of the screen for 1 second. (b) A countdown circle is displayed on the screen for 4 seconds to remind the subject of the remaining thinking time. (c) The screen is divided into two areas, each displaying a candidate answer; subjects have 1 second to select the correct one. (d) The screen shows feedback on the result for 1 second: a green circle for a correct answer, a red cross for a wrong answer, and a white question mark if no answer was selected in time. Performing (a)-(d) once constitutes one cycle, and the task stage comprises 45 cycles.

Recruitment was started only after the approval of the Institutional Review Board (IRB) of the Taipei Veterans General Hospital (No.: 2017-01-010C). All methods in this research were performed in accordance with the relevant guidelines and regulations. The inclusion criteria were: age 20 to 60 years, meeting the diagnostic criteria of the third edition of the International Classification of Headache Disorders (ICHD-3), and the ability to fully record migraine attack patterns and basic personal data. Exclusion criteria were any major mental or neurological disease (including brain damage and brain tumors), smoking habits, or alcohol abuse. The HC group comprised 13 medical staff of Taipei Veterans General Hospital with an average age of 44.9 ± 8.7 years. The CM and MOH subjects were patients of the Neurology Clinic of Taipei Veterans General Hospital: 9 and 12 patients with average ages of 34.8 ± 10.9 and 45.8 ± 11.2 years, respectively. Informed consent was obtained from all subjects.

The fNIRS signal can be characterized along three axes: (i) source (intracerebral vs. extracerebral), (ii) stimulus/task relation (evoked vs. non-evoked), and (iii) cause (neuronal vs. systemic)17. In our study, task-evoked and spontaneous neurovascular coupling are of primary interest. To obtain the different types of fNIRS signals for subsequent feature extraction, two filters were applied in parallel. The first was a low-pass filter, a fourth-order Butterworth filter with a cutoff frequency of 0.1 Hz18, which removes systemic noise such as the heartbeat (about 1 Hz), breathing (about 0.3 Hz), and the Mayer wave (about 0.1 Hz). This yields the changes in the neurovascular coupling signal caused by the MAT as a whole. The second was a band-pass filter with a pass band of 0.01-0.3 Hz19, which reveals the hemodynamic response of the PFC, i.e. the signal changes after each stimulation.
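The two parallel filters described above can be sketched with SciPy as follows. This is a minimal illustration, assuming the stated fourth-order Butterworth design and the device's ~17 Hz sampling rate; the function names and the use of zero-phase `filtfilt` are our assumptions, not the paper's code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 17.0  # sampling frequency of the fNIRS device (Hz)

def lowpass_01hz(signal, fs=FS):
    """4th-order Butterworth low-pass, 0.1 Hz cutoff: attenuates
    heartbeat (~1 Hz), breathing (~0.3 Hz), and Mayer-wave (~0.1 Hz)
    components, leaving the slow task-related trend."""
    b, a = butter(4, 0.1 / (fs / 2), btype="low")
    return filtfilt(b, a, signal)  # zero-phase filtering

def bandpass_001_03hz(signal, fs=FS):
    """Band-pass 0.01-0.3 Hz: keeps the stimulus-locked hemodynamic
    response of the PFC."""
    b, a = butter(4, [0.01 / (fs / 2), 0.3 / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

# Example on 555 s of synthetic HbO data: a slow 0.05 Hz component
# plus a ~1 Hz "heartbeat" component that the low-pass removes.
t = np.arange(0, 555, 1 / FS)
raw = np.sin(2 * np.pi * 0.05 * t) + 0.3 * np.sin(2 * np.pi * 1.0 * t)
slow = lowpass_01hz(raw)
```

`filtfilt` runs the filter forward and backward, so the filtered signal is not phase-shifted relative to the task stages, which matters when segmenting by stage boundaries.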

Since the purpose of the MAT was to stimulate the PFC, analysis focused on the two corresponding channels, Ch2 and Ch3. The collected signals comprised oxygenated hemoglobin (HbO) and deoxygenated hemoglobin (HHb). Two further signals were derived by adding and subtracting these: total hemoglobin (HbT) and brain oxygen exchange (COE), respectively. These data were divided into three parts according to the MAT stages (rest, task, recovery).
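The two derived signals can be computed directly from the measured pair. The values below are arbitrary illustrations, and the subtraction order for COE is an assumption, since the text only says "subtracting":

```python
import numpy as np

hbo = np.array([1.2, 1.5, 1.1])  # oxygenated hemoglobin (arbitrary units)
hhb = np.array([0.4, 0.3, 0.5])  # deoxygenated hemoglobin

hbt = hbo + hhb  # total hemoglobin (HbT)
coe = hhb - hbo  # brain oxygen exchange (COE), assumed as HHb - HbO
```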

Feature extraction is a method for selecting informative features from a large body of data; proper feature extraction improves the quality of model training. The features used in this experiment, illustrated in Fig. 10, are introduced one by one below.

Low-pass filter

Stage mean difference: the average hemoglobin difference at each stage, used to observe the average change of the subject's fNIRS signal across stages.

Transition slope: Coyle et al.22 reported in 2004 that the maximum light intensity detectable by fNIRS occurs about five to eight seconds after stimulation, so we used an eight-second window. The feature is the slope of the fNIRS signal over the first eight seconds after entering a new stage: the values in that interval are fitted with a linear function, and the first-order coefficient is the slope. It captures how the fNIRS signal changes under different stimulation.

Transition slope difference: the difference between transition slopes, used to observe differences in how the fNIRS signal changes under different stimulation.

Normalization: normalization shifts and rescales the data; features 1-3 were recalculated after this process. The normalized data fall between zero and one, allowing the characteristic differences of the fNIRS signal across subjects to be compared relative to each subject's own signal amplitude.

Band-pass filter

Stage standard deviation: the standard deviation of the fNIRS signal at each stage, used to observe the dispersion of the data.

Stage skewness: the skewness of the fNIRS signal at each stage, used to observe the asymmetry of the distribution of signal values.

Stage kurtosis: the kurtosis of the fNIRS signal at each stage, which describes the tail length of the distribution of signal values23. Outliers affect the kurtosis far more strongly than values near the mean.
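A minimal sketch of the three band-pass statistical features for one stage, using `scipy.stats`; the function and key names are illustrative, not from the paper:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def stage_features(stage_signal):
    """Per-stage statistics of the band-pass-filtered fNIRS signal."""
    x = np.asarray(stage_signal, dtype=float)
    return {
        "std": x.std(),           # dispersion of the data at this stage
        "skewness": skew(x),      # asymmetry of the value distribution
        "kurtosis": kurtosis(x),  # tail weight; dominated by outliers
    }

# Example on a near-Gaussian synthetic stage signal
rng = np.random.default_rng(0)
feats = stage_features(rng.normal(size=2000))
```

Note that `scipy.stats.kurtosis` returns excess kurtosis by default (0 for a Gaussian), which is the usual convention for such features.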

Combining the above features yields a total of 144 features, which served as the inputs to linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA).

Logistic regression is a model commonly used for classification, but it has drawbacks. First, it natively handles only binary classification and becomes awkward for multi-class problems; second, it copes poorly with a large number of features or variables. Most importantly, when the amount of data is small, the results are unstable because there is too little basis for optimizing the parameters. LDA offsets these disadvantages, particularly for multi-group problems. LDA rests on two assumptions: each group of data is Gaussian distributed, and, so that the decision boundary has a clear geometric meaning, the covariance matrices of all groups are equal. QDA, by contrast, drops the equal-covariance constraint. The credibility of each model was evaluated with leave-one-out cross-validation (LOOCV), which is commonly used for small data sets and gives greater confidence in the reported diagnostic performance of fNIRS.
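The evaluation setup can be sketched with scikit-learn as follows. The data here are synthetic; only the group sizes (13 HC, 9 CM, 12 MOH) mirror the text, and to keep QDA's per-class covariance estimates well-conditioned in this toy example we simulate just 6 of the 144 features.

```python
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(34, 6))           # 34 subjects x 6 synthetic features
y = np.repeat([0, 1, 2], [13, 9, 12])  # labels: HC, CM, MOH group sizes

results = {}
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis())]:
    # LOOCV: train on 33 subjects, test on the held-out one, 34 times
    results[name] = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
```

With random features the LOOCV accuracy hovers near chance; on the real 144-dimensional feature set, regularization or feature selection would likely be needed for QDA's per-class covariance estimates.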


The latest in applications of machine learning and artificial intelligence, in one place – EurekAlert

Photo of "Handbook on Computer Learning and Intelligence (in 2 Volumes)"

Credit: World Scientific

A new two-volume publication has been published, edited by Prof Plamen Angelov, IEEE Fellow, Director of Research at the School of Computing and Communications, and Director of the Lancaster Intelligent, Robotic and Autonomous systems (LIRA) Research Centre, both at Lancaster University, UK.

Containing 26 chapters, the Handbook on Computer Learning and Intelligence (in 2 Volumes) explores different aspects of Explainable AI, Supervised Learning, Deep Learning, Intelligent Control, and Evolutionary Computation, covering a uniquely broad range of aspects of computer (or machine) learning and intelligence. This is complemented by a set of applications to practical problems.

This handbook is a one-stop-shop compendium covering a wide range of aspects of computer (or machine) learning and intelligence. The chapters detail the theory, methodology, and applications of computer (machine) learning and intelligence, and are authored by some of the leading experts in the respective areas. It is must-have reading and a tool for early-career researchers, graduate students, and specialists alike.

The Handbook on Computer Learning and Intelligence (in 2 Volumes) retails for US$298 / 240 (hardcover set) and is also available in electronic formats. To order or learn more about the book, visit http://www.worldscientific.com/worldscibooks/10.1142/12498.

###

About the Editor

Plamen Angelov holds a Personal Chair in Intelligent Systems and is Director of Research at the School of Computing and Communications at Lancaster University, UK. He obtained his PhD in 1993 and his DSc (Doctor of Sciences) degree in 2015, the year he also became a Fellow of the IEEE. Prof. Angelov is the founding Director of the Lancaster Intelligent, Robotic and Autonomous systems (LIRA) Research Centre, which brings together over 70 researchers across fifteen different departments of Lancaster University. He is a Fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS) and of the Institution of Engineering and Technology (IET), as well as a Governor-at-large of the International Neural Networks Society (INNS) for a third consecutive three-year term, following two consecutive terms in the elected role of Vice President. In the last decade, Prof. Angelov founded two research groups (the Intelligent Systems Research group in 2010 and the Data Science group in 2014) and was a founding member of the Data Science Institute and of the CyberSecurity Academic Centre of Excellence at Lancaster.

About World Scientific Publishing Co.

World Scientific Publishing is a leading international independent publisher of books and journals for the scholarly, research and professional communities. World Scientific collaborates with prestigious organisations like the Nobel Foundation and US National Academies Press to bring high quality academic and professional content to researchers and academics worldwide. The company publishes about 600 books and over 140 journals in various fields annually. To find out more about World Scientific, please visit http://www.worldscientific.com.

For more information, contact WSPC Communications at communications@wspc.com.



HMUSA 2022 Conference: Using Machine Learning to Understand Production – From Tools to Finished Parts – Today’s Medical Developments

About the presentation
Modern data science techniques are used across many industries, but the manufacturing industry has been slow to adopt advanced analytics to optimize and accelerate operations. There are many reasons for this, such as a shortage of data science and analytics talent and an "if it's not broke, don't fix it" attitude. However, forward-thinking manufacturers are starting to embrace the power of big data and analytics to improve their operations and create a nimbler work environment. Several companies have stepped into the analytics gap, offering software and hardware solutions that provide more data and actionable insights into everything from toolpath optimization to overall production intelligence in real time. With the amount of information produced by CNC machines and advanced toolpath optimization solutions, there's a big opportunity to use machine learning and advanced analytics to develop deeper insights that span from tooling to the machine, and all the way to overall factory performance. In this session, Greg McHale, CTO and co-founder of Datanomix, and Rob Caron, founder and CEO of Caron Engineering, discuss the types of data available from these manufacturing optimization solutions and the possibilities for driving deep insights into overall factory performance using advanced data science and machine learning.

Meet your presenter
Greg McHale founded Datanomix on the premise that the 4th industrial revolution would require turnkey products that integrate seamlessly with how manufacturers work today. He brings enterprise data skills to a market ripe for innovation. McHale held engineering leadership positions at several venture-backed companies and is a graduate of Worcester Polytechnic Institute.

In 1986, after working in the CNC machining industry for many years, President Rob Caron (P.E.) identified a gap in the market for CNC tool monitoring and machine control. He decided to start his own company in the basement of his Wells, Maine home to pursue the development of these conceptual technologies. Today, Caron Engineering has a new 12,000 ft² facility with more than 35 employees and a dynamic product line of smart manufacturing solutions that are unmatched in the industry.

About the companies
Founded in 2016 in New Hampshire, Datanomix offers automated production intelligence for precision manufacturers. When we started Datanomix, we met with dozens of manufacturers who were trying to use data from their equipment to optimize operations. Not one company was getting what they wanted out of their existing monitoring systems: information was either too complicated and cumbersome, or too simple and not insightful. The user interfaces on those systems made it look like those companies didn't understand manufacturers. Based on their input, we built a system designed around a few key principles: the system would require no human input for the data to be useful; the information provided by the system would be actionable right now; and the system should be a member of your team, capable of providing answers and insights in the way you think about your business.

At Caron Engineering, our mission is to transcend the industry standard by developing advanced sensor and monitoring technology to optimize performance, productivity, and profitability. As an employee-owned entity (ESOP), we work together to bring the best possible service and quality to our customers. It's our goal to provide leading smart manufacturing solutions that reduce cycle times, promote unattended operation, drive down tooling costs, and minimize expensive damage to machines and work-holding. Our people are in the business of developing adaptive solutions for the future of manufacturing through strong leadership, foresight, and diligence. At Caron Engineering, innovation is what drives us.


Autonomous Experiments in the Age of Computing, Machine Learning and Automation: Progress and Challenges – Argonne National Laboratory

Abstract: Machine learning has by now become a widely used tool within materials science, spawning entirely new fields such as materials informatics that seek to accelerate the discovery and optimization of material systems through both experiments and computational studies. Similarly, the increasing use of robotic systems has led to the emergence of autonomous systems ranging from chemical synthesis to personal vehicles, which has spurred the scientific community to investigate these directions for its own tasks. This begs the question: when will mainstay scientific synthesis and characterization tools, such as electron and scanning probe microscopes, start to perform experiments autonomously?

In this talk, I will discuss the history of how machine learning, automation and availability of compute has led to nascent autonomous microscopy platforms at the Center for Nanophase Materials Sciences. I will illustrate the challenges to making autonomous experiments happen, as well as the necessity for data, computation, and abstractions to fully realize the potential these systems can offer for scientific discovery. I will then focus on our work on reinforcement learning as a tool that can be leveraged to facilitate autonomous decision making to optimize material characterization (and material properties) on the fly, on a scanning probe microscope. Finally, some workflow and data infrastructure issues will also be discussed. This research was conducted at and supported by the Center for Nanophase Materials Sciences, a US DOE Office of Science User Facility.
