Daily Archives: March 25, 2021

Automating Business Operations with IoT – IoT For All

Posted: March 25, 2021 at 2:51 am

As Industry 4.0 moves from concept to reality, smart devices have made their way into almost every industry vertical and have disrupted the day-to-day operations of conventional organizations. Businesses increasingly treat these devices as a necessary investment to stay ahead of the competition and drive long-term growth. For instance, technologies like video analytics are being used to enhance bank, enterprise, and warehouse security.

IoT can be thought of as a series of interconnected devices that interact with each other to perform tasks that previously required human intervention. Information transmitted between the devices reduces manual procedures, improves accuracy, and shortens the time taken to perform a particular task. Moreover, the data these devices generate can be harnessed to derive valuable insights that help the company streamline its operations far more efficiently.

Business automation has been an area of focus for leaders because it provides a well-defined workflow and superior infrastructure, freeing the firm to focus on high-value activities. The steps involved are data capture, storage, and the extraction of valuable insights using emerging technologies such as Artificial Intelligence and Machine Vision.

IoT enables organizations to function efficiently by decreasing operational expenses, improving safety, and streamlining enterprise functions.

There are several ways sustainable growth can be achieved in the core working areas of the company using IoT. Reduction in operational expenses can be achieved with effective inventory management and the optimization of energy consumption.

In supply chain and logistics management, IoT facilitates the integration of existing technologies such as barcode scanners and Radio Frequency Identification (RFID)-based systems for managing inventory. Warehouse security can also be handled with the help of such technologies.

Carbon footprint reduction is a key concern in today's environment-conscious world. Devices such as thermostats can be integrated into the IoT infrastructure, but efficient energy management only happens when these devices are connected to a robust platform architecture that supports them.

In many industries such as retail and banking, safety norms have to be enforced using e-surveillance devices. These devices can be turned into smart platforms with the help of IoT infrastructure and video analytics, and security guidelines have become more stringent as thefts have risen. The use cases vary from industry to industry.

Several metrics can be derived from the video data generated by recording devices, which can also be retrieved later. These include footfall analytics, which can be crucial for determining the staff-to-customer ratio in a bank at a particular time and the demographics of the customers walking in.
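As a concrete illustration of the footfall metric described above, here is a minimal sketch that computes an hourly staff-to-customer ratio from hypothetical entry events; the event format, roles, and values are assumptions for the example, not the output of any specific analytics product.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical entry events produced by a people-counting camera:
# (timestamp, role) where role is "customer" or "staff".
events = [
    (datetime(2021, 3, 25, 9, 15), "customer"),
    (datetime(2021, 3, 25, 9, 40), "staff"),
    (datetime(2021, 3, 25, 10, 5), "customer"),
    (datetime(2021, 3, 25, 10, 20), "customer"),
    (datetime(2021, 3, 25, 10, 45), "staff"),
]

counts = defaultdict(lambda: {"customer": 0, "staff": 0})
for ts, role in events:
    counts[ts.hour][role] += 1  # bucket footfall by hour of day

for hour, c in sorted(counts.items()):
    ratio = c["staff"] / c["customer"] if c["customer"] else float("inf")
    print(f"{hour:02d}:00  staff={c['staff']}  customers={c['customer']}  ratio={ratio:.2f}")
```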

A key use case of this infrastructure is ensuring bank security at any given moment. Bank vaults benefit from the tight security these devices maintain. With manned guarding proving expensive for banks expanding across the country, and unreliable in security terms, video analytics comes to the rescue. Devices can be managed remotely, and a dedicated command center can detect threats well before an incident occurs.

For businesses that deal with perishable goods, such as in cold chain management, IoT helps ensure that food and beverage product temperatures do not exceed prescribed thresholds, enabling a timely response before products become unfit for consumption. In manufacturing, maintenance reporting and diagnostics can be automated, which adds flexibility to the process, reduces out-of-specification components, and simplifies operations. Intrusions can also be detected with the help of the aforementioned devices, since many warehouses and storage facilities are located on the outskirts of cities.
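To make the temperature-threshold idea concrete, here is a minimal sketch of an alerting check for cold chain telemetry. The sensor names, readings, threshold, and alert behavior are placeholders for illustration, not taken from any particular IoT platform.

```python
# Hypothetical cold-chain readings: (sensor_id, temperature in degrees Celsius).
readings = [("truck-07", 3.8), ("truck-07", 4.6), ("warehouse-2", 2.1)]

MAX_TEMP_C = 4.0  # illustrative threshold for perishable goods

def check_cold_chain(readings, max_temp=MAX_TEMP_C):
    """Return the readings that breach the prescribed temperature limit."""
    breaches = [(sensor, temp) for sensor, temp in readings if temp > max_temp]
    for sensor, temp in breaches:
        # In a real deployment this would raise an alert (SMS, dashboard, etc.).
        print(f"ALERT: {sensor} at {temp:.1f} C exceeds {max_temp:.1f} C")
    return breaches

check_cold_chain(readings)
```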

Medical facilities carry their own risks, particularly around hygiene management and waste management.

Proper hygiene management ensures that the safety of patients and other stakeholders is not compromised and that harmful diseases do not spread. Enterprise security in healthcare, including fire and life safety and intrusion detection, is of utmost importance to healthcare providers. Perimeter monitoring performed with specialized video analytics tools is crucial for securing both the external and internal premises of a facility.

To simplify management decision-making, business intelligence dashboards are extremely useful, providing access to insightful information and historical data. Even in forensics, one of the most highly regulated fields, video analytics is essential for determining the validity of claims in insurance settlements.

The food safety and cleanliness standards of restaurants can be monitored using video analytics, and the footage can be easily retrieved and traced wherever necessary with the help of cloud storage. Fire and smoke can be sensed using proactive surveillance and purpose-built smoke sensors, which can quickly alert security staff so that heavy losses are avoided.

Temperature sensors are a crucial part of smart surveillance, as they prevent food spoilage by verifying that the requisite inventory is stored at optimal temperature levels. Energy consumption can also be controlled using the consumption patterns and insights derived from the video surveillance platform.

Footage from multiple outlets can be accessed to standardize food quality across geographical locations. Even in the day-to-day functioning of a company, IoT-based platforms are increasingly becoming the norm. Let us look at how they can be implemented to improve everyday operations:

Buildings can be made more power-efficient and overall productivity increased manifold by using smart lighting, humidity-control devices, and the like. Adopting these technologies can significantly reduce utility bills.

Another insightful use case is driving effective BPO governance using IoT-based surveillance platforms, including paper detection and mobile phone detection. Through computer vision and related technologies, the use of such objects in the workplace can be detected, which helps enforce organizational decorum. Even fire alarms can be automated.

Enterprise security can be well ensured when the requisite innovation is in place.

While choosing a reliable IoT platform provider, a number of points should be considered:

In addition, there are other crucial parameters that have to be considered:

See original here:

Automating Business Operations with IoT - IoT For All

Posted in Automation | Comments Off on Automating Business Operations with IoT – IoT For All

What Is Conscious Evolution? | HuffPost

Posted: at 2:51 am

For most of us, spiritual evolution does not occur simply as a result of one flash of insight or revelation. On the contrary, it usually requires inspired intention and consistent, diligent effort. And the way this is achieved is through the greatest gift that evolution has given us: the power of choice.

The power of conscious choice, or free agency, is unique to human beings as far as we know. You and I are highly evolved individuated selves who have been blessed with the extraordinary capacity for self-reflective awareness and the freedom to choose. In fact, these are the very faculties that make it possible for us to consciously evolve. Think about it: You, whoever you are, at least to some degree have the power to choose. How much do you really appreciate the significance of this extraordinary birthright? It is surprising how few people consider the deeper implications of possessing the freedom to choose. Just imagine -- without free agency, who would you be? Little more than a robot, unconsciously responding and reacting to conditioned egoic fears and desires, cultural triggers, biological impulses, and external stimuli, with no control over your own destiny. But while it is true that we are all profoundly influenced by many of these forces, both inner and outer, at the same time, it is equally true that we always have at least some measure of freedom to choose how we respond.

If you aspire to become an evolutionarily enlightened human being, your ability to do so depends upon accepting the simple fact that independent of external circumstances, you always have a measure of freedom to choose. That sounds like a simple statement, but it's amazing how many intelligent people will deny it. When you look honestly for yourself, however, you will see that it is true: you are always choosing. Sometimes your choices are conscious; sometimes they are unconscious. Sometimes they are inspired by the best parts of yourself; other times they are motivated by lower impulses and instincts. But the bottom line is that every time you act or react, at some level a choice is being made. And you, whoever you are, are the one who is making that choice. After all, who else could it be?

Conscious evolution is a simple concept to grasp, but not quite as simple to put into practice. Our freedom to choose is not unlimited. We each have some measure of freedom; not complete freedom, but a measure, and that measure is greater for some people than it is for others. But as long as there is some, it is enough to begin. If there is a measure of freedom, then there is freedom to choose.

What that means is that in relationship to the important choices you make, you are never completely unconscious. There is always some degree of awareness, however small, which gives you the freedom to choose. And the path of conscious evolution is about increasing that degree of awareness, increasing that measure of freedom, until you are living as the enlightened self that you consciously choose to be, rather than the unenlightened self you have unconsciously and habitually identified with your entire life.

I believe that it is possible to take responsibility for the entirety of who you are in such a profound way that you can consciously choose who you want to be. But that doesn't mean it will be easy. The human self is by nature a complex multidimensional process, and within that process are many factors that limit our freedom and obscure our awareness. There are powerful biological instincts that still drive us on a deep level to act in ways that challenge our higher rational inclinations. There are all the karmic consequences of our personal history, the emotional and psychological tendencies that have formed in response to our particular life experience. There are layers of cultural conditioning, values and assumptions about how things should be that color our perspectives without us even knowing it. And many people believe that within our psyches we also carry the unresolved stories of previous lifetimes. All these factors play a part in the complex web of motives and impulses that makes up your sense of self. All of this is you. And yet it is possible to take responsibility for all of these dimensions of who you are, through the transformative recognition that you are always the one who is choosing.

If you aspire to evolve, if you intend to become a conscious vehicle for the evolutionary impulse, you have to use the God-given powers of awareness and conscious choice to navigate between your new and higher spiritual aspirations, and all of the conditioned impulses and habits that are embedded in your self-system. You need to become so conscious that you can make choices that move you, consistently, in an evolutionary direction. And it is only through the wholehearted embrace of your power of choice that it becomes possible for you to do this. This is what I often call "enlightening the choosing faculty" -- bringing the light of consciousness, conscience and higher purpose to bear on the unique and extraordinary capacity within that can define your destiny.

Eventually, if you go far enough in your spiritual development, the self-generated momentum of your own evolutionary choices will become the driving force of your life, rather than the unconscious habits of the past. And that's when something very profound occurs. Your capacity to choose will become more and more aligned with the creative freedom of the evolutionary impulse, the energy and intelligence behind the initial choice to become. When free agency, the greatest gift of the evolved human, is liberated from unconscious and habitual patterns and becomes identified with a higher or cosmic will, the individual becomes a conscious agent of evolution.

When your power of choice aligns itself with the evolutionary impulse in this way, your own deepest, heart-felt, spiritual aspiration becomes one with the original cosmic intention to create the universe. That's what Evolutionary Enlightenment is pointing to. To the degree to which you make conscious and transcend those outdated biological, psychological, and cultural habits within yourself that are inhibiting your higher development, you become an ever-more-powerful agent for conscious evolution.

View original post here:
What Is Conscious Evolution? | HuffPost

Posted in Conscious Evolution | Comments Off on What Is Conscious Evolution? | HuffPost

What is Conscious Evolution? How is Consciousness Created …

Posted: at 2:51 am

I will answer these questions now.

What is Conscious Evolution?

Conscious evolution is the story of human consciousness and perception developing over a period of many thousands of years. Consciousness is being modified now in an evolutionary way. We call this metabiological evolution.

During the period from about 12,000 years ago to 5,000 years ago, a more reflective and outward-focused consciousness develops, and this leads to the agricultural revolution.

What happened 5,000+ years ago

An internal rather than external trigger leads to a new self-consciousness. A new form of self-consciousness then leads to the domestication of animals and the first agricultural communities.

The focus or filter of consciousness through the ego and in the direction of the physical world begins about 5,000+ years ago.

The hunting-gathering period was not as barbaric as we were taught. Early hunters were not struggling to survive by clubbing beasts but typically lived abundantly, at least as much by picking berries and eating plants. Their consciousness or perception penetrated other life by invitation and was rich, broad, democratic and satisfying. Only about five thousand years ago does the kind of narrow perception we have now begin to develop.

For a person living five-thousand years ago there was no sharp separation between his own consciousness and the consciousness of other life forms. Nor was there a sense of being in conflict or competition with other creatures or tribes. Consciousness could flow out of the body, through the environment and could perceive through the eyes of others. This created great empathy and compassion for all other forms of life, human included.

Early man's mind was not like ours. The shift from the nature religions to male gods marks a relatively sudden worldwide paradigm shift. This marks the initial development of the ego-type consciousness we currently possess.

Self-consciousness also leads to the development of egotistical behavior, evident in tribal or nationalistic thinking that competes with others or sees them as potential threats to guard against, or worse.

Eons ago and on a collective subconscious level, the consciousness of the race made a deliberate decision to move in a new direction by developing a new type of self-consciousness. A new, more physically oriented self would be developed.

The consciousness of that self would be increasingly cut off from the inner psychic reality from which it came and instead focused on physical reality.

The new physically oriented consciousness would develop a sense of separateness from others and nature. This has been achieved; it is our own particular type of consciousness we are so familiar and well acquainted with.

Ego consciousness is particularly well suited to manipulate matter, which was the intention. However, other problems have been created as a result of this orientation, and these problems are somewhat severe.

Thus, about 5,000 years ago we began the rather uncharacteristic period of violent behavior and separation-consciousness (self-conscious ego perception to the highest degree). We are still largely within this violent period.

CAN & DO YOUR THOUGHTS CREATE YOUR REALITY? DO SCIENTISTS BELIEVE CONSCIOUSNESS CREATES MATTER?

Many top physicists do know that thoughts create matter and reality.

It is the rest of science, media and the world who do not want to listen to what they are saying.

Max Planck, the Nobel Prize-winning father of quantum mechanics, said: "I regard matter as derivative from consciousness." (The Observer, 1931)

Max Tegmark of MIT says that consciousness is a state of matter.

New history-changing science transforming our personal and collective reality at this time

An explanation of how consciousness creates matter is inherent in wave-particle duality. The concept of an alive universe, a universe entirely composed of consciousness, is supported by the greatest scientific discovery of all time.

In his 1924 Ph.D. thesis and groundbreaking contributions to quantum theory, Nobel Prize winner Louis de Broglie postulated the wave nature of electrons and suggested that all matter has wave properties. This concept is known as wave-particle duality and forms a central part of the theory of quantum mechanics.
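For reference, the relation de Broglie proposed links a particle's wavelength to its momentum; in standard notation,

\[
\lambda = \frac{h}{p}
\]

where \(h\) is Planck's constant and \(p\) is the particle's momentum, so every piece of matter has an associated wavelength.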

These theories have been tested and confirmed. We know that quantum mechanics is correct because its mathematical formalism is consistently relied upon in scientific applications and to build advanced devices that work amazingly well.

Where Do Thoughts Go?

The wave aspect of anything in nature is an energy field that contains huge amounts of information.

What is a thought but an energy field containing information?

Both waves of matter and thoughts are electromagnetic energy. Matter and waves are the same thing. The facts are right in front of us.

Your unlimited true nature & matter-creating consciousness

Your mind is connected to everything in the universe, can bypass physical laws of cause and effect and time and space restrictions, and can permeate any seeming barrier.

Non-classical physics mind-brain science is superior to all previous models on which traditional psychology and self-help are based.

The science on this page supports the primary purpose of this site: to empower you to be successful in all areas, including intellectual and spiritual knowledge, business, finance, health, love, and relationships.

This site offers personal self-transformation wisdom that is meant to be exciting and fun to learn.

For something to be true it must make sense to the heart and mind. But that does not mean you need a scientist to go to the next level and to create the reality you want to experience.

Many people are searching for a scientific finding that will validate what they feel on a deeper level. That is good, but if you are a person who insists on a dry, traditional scientific explanation for everything, that requirement can restrict you. I am moving you closer to your own heart and intuitions as the arbiters of truth. A greater range and experience of love and understanding is available to you.

All of this is leading to a new paradigm, the paradigm I have lived by for almost 50 years.

When we look at a tree we do not see the roots, but we know a tree has roots. The same principle holds true for you. Your consciousness has a deep inner reality.

The rest is here:
What is Conscious Evolution? How is Consciousness Created ...

Posted in Conscious Evolution | Comments Off on What is Conscious Evolution? How is Consciousness Created …

The robots are coming for your office – The Verge

Posted: at 2:50 am

As the editor-in-chief of The Verge, I can theoretically assign whatever I want. However, there is one topic I have failed to get people at The Verge to write about for years: robotic process automation, or RPA.

Admittedly, it's not that exciting, but it's an increasingly important kind of workplace automation. RPA isn't robots in factories, which is often what we think of when it comes to automation. This is different: RPA is software. Software that uses other software, like Excel or an Oracle database.

On this week's Decoder, I finally found someone who wants to talk about it with me: New York Times tech columnist Kevin Roose. His new book, Futureproof: 9 Rules for Humans in the Age of Automation, has just come out, and it features a lengthy discussion of RPA, who's using it, who it will affect, and how to think about it as you design your career.

What struck me during our conversation were the jobs that Kevin talks about as he describes the impact of automation: they're not factory workers and truck drivers. They're accountants, lawyers, and even journalists. If you have the kind of job that involves sitting in front of a computer using the same software the same way every day, automation is coming for you. It won't be cool or innovative or even work all that well; it'll just be cheaper, faster, and less likely to complain. That might sound like a downer, but Kevin's book is all about seeing that as an opportunity. You'll see what I mean.

Okay, Kevin Roose, tech columnist, author, and the only reporter who has ever agreed to talk to me about RPAs. Here we go.

This transcript has been lightly edited for clarity.

Kevin Roose, you're a tech columnist at The New York Times and you have a new book, Futureproof: 9 Rules for Humans in the Age of Automation, which is out now. Welcome to Decoder.

Thank you for having me.

You're ostensibly here to promote your book, which is great. And I wanna talk about your book. But there's one piece of the book that I am absolutely fascinated by, which is this thing called robotic process automation. And I'm gonna do my best with you on this show, today, to make that super interesting.

But before we get there, let's talk about your book for a minute. What is your book about? Because I read it, and it has a big idea and then there's literally nine rules for regular people to survive. So, tell me how the book came together.

So, the book is basically divided into two parts. And the first part is basically the diagnosis. It's sort of, what is AI and automation doing today, in the economy, in our lives, in our homes, in our communities? How is it showing up? Who is it displacing, who is at risk of losing career opportunities or, you know, other things to these machines? What do we think about the arguments that this is all gonna turn out fine, what's the evidence for that? And the second half of the book is really the sort of practical advice piece, that's the nine rules that you mentioned.

And so it was my attempt to basically say, "What can we do about AI and automation?" Because I think you and I have been to dozens of tech conferences, and there's always some talk about AI and automation and jobs. And some people are very optimistic, some people are very pessimistic, but at the end there's always this chart that shows how many jobs could be displaced by automation in the next 10 years. And then the talk ends.

[Laughs]

Everyone just goes to lunch, you know? And it's like, "Okay, but..." I'm sitting there like, "What do I do? I am a journalist, I work in an industry that is employing automation to do parts of my job; what should I, what should anyone, do to prepare for this?" So, I wanted to write that, because I didn't see that it existed anywhere.

You just said, "We're journalists, it's an industry that employs automation to do parts of our job." I think that gets kinda right to the heart of the matter, which is the definition of automation, right?

I think when most people think of automation, they think of robots building cars and replacing factory workers in Detroit. You are talking about something much broader than that.

Yeah. I mean, that's sort of the classic model of automation. And still, every time there's a story about automation, and I hate this, and it's like my personal vendetta against newspaper and magazine editors, every time you see a story about automation, there's always a picture of a physical robot. And I get it. Most robots that we think of from sci-fi are physical robots. But most robots that exist in the world today, by a vast majority, are software.

And so, what you're seeing today in corporate environments, in journalism, in lots of places, is that automation is showing up as software that does parts of the job that, frankly, I used to do. My first job in journalism was writing corporate earnings stories. And that's a job that has been largely automated by these software products now.

So an earnings story is, just to put in sort of an abstract framework, a company releases its earnings, those earnings are usually in a format, because the SEC dictates that earnings are released in a format.

You say, "Okay, here's the earnings per share, here is the revenue. Here's what the consensus analyst estimates were." They either beat the earnings or didn't. You can just write a script that makes that a story, you don't really need a person in the mix because there's almost no analysis to that. Right?

Right. And that's not even a very hard form of automation. I mean, that technology existed years ago, because it's very much like filling in Mad Libs. You know, it's like, "Put the share price here, put the estimate here, put the revenue here."
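As a rough illustration of the "Mad Libs" style of earnings automation described here, the sketch below fills a fixed template from structured figures. The company, numbers, and field names are invented for the example, not taken from any real filing or newsroom system.

```python
# Hypothetical structured earnings data, e.g. parsed from an SEC filing feed.
earnings = {
    "company": "Example Corp",
    "quarter": "Q4 2020",
    "eps": 1.42,
    "eps_estimate": 1.30,
    "revenue_bn": 12.7,
}

TEMPLATE = (
    "{company} reported {quarter} earnings of ${eps:.2f} per share on revenue of "
    "${revenue_bn:.1f} billion, {verdict} the consensus estimate of ${eps_estimate:.2f} per share."
)

# The only "analysis" is a comparison against the analyst estimate.
verdict = "beating" if earnings["eps"] > earnings["eps_estimate"] else "missing"
story_lede = TEMPLATE.format(verdict=verdict, **earnings)
print(story_lede)
```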

But now, what we're seeing with GPT-3 and other language models that are based on machine learning, is that it's not just Mad Libs anymore. These generated texts are getting much better, they're much more convincing and compelling. They're much more original, they're not just sort of repeating things that they've picked up from other places. So I think we'll see a lot more AI in journalism in the coming years.

So, we cover earnings at The Verge, we do it with a very different lens than a business publication, but we pay attention to a lot of companies. We care about their earnings, we cover them. If I could hire the robot to write the first two paragraphs of an earnings story for a reporter, I think all of my reporters would be like, "Great. I don't wanna do that part. I wanna get to the fun part where Tim Cook on the call said something shocking about the future of the Mac." Right? And that's the part of the story that's interesting to us, anyway.

It seems like a lot of the automation story is doing jobs that are really boring, that people don't necessarily like to do. The tension there is, "Well, shouldn't we automate the jobs that people don't like to do?"

Yeah, this is the argument for automation in the workplace, is that all the jobs that are automatable are repetitive and boring and people don't wanna be doing them anyway. And so that's what you'll hear if you call up a CEO of a company that sells automating software, I mean, RPA (robotic process automation) software. And that's what I heard over and over writing this book. But it's a little simplistic, because automation can also take away the fun parts of people's jobs that they enjoy.

There's a lot of examples of this through history, where a factory automates, and the owners of the factory are like, "This is great for workers, they hated lugging big pieces of steel and so now we'll have machines do that and they'll be able to do the fun and creative parts of the job." And then they install the automation and the robots, and it turns out that the workers don't like it because that was part of the job that they enjoyed. It wasn't necessarily lugging the pieces of steel, but the camaraderie that built around that. And the downtime between big tasks.

Ideally, it would be the case that automation only took away the bad and boring and dull parts of people's jobs, but in practice that's not always how it works. And now, with things like RPA, we're seeing automation that is designed not just to replace one task or two tasks, but is really designed to replace an entire human's workload. The RPA companies now are selling what they call digital workers.

So instead of automating earnings reports, you can automate entry-level corporate journalism. Or you can automate internal communications. There are various ways that this is appearing in the corporate world. But I think there's a gap between what the sort of utopian vision of this is, and how it's actually being put into practice.

Let's talk about RPA. I'm very excited. You're the only person who's ever wanted, who's ever volunteered an hour of their life to talk about RPA with me. So, RPA is robotic process automation, which is an incredible name. In my opinion, made to sound as dull as possible.

It's like ASMR, if you wanna fall asleep you could just read a story about RPA.

[Laughs] The first time anyone told me about RPA, it was a consultant at a big consulting firm, and they were like, Our fastest-growing line of business is going into hospitals and insurance companies where they have an old computer system, and it is actually cheaper and easier for us to replace the workers who use the old computer system, than it is to upgrade the computer system.

So, we install scripts that automate medical billing, and are basically KVM switches, so keyboard-video-mouse switches that use an old computer, like they click on the buttons. The mouse moves around and clicks on the old computer system, and that is faster and easier to replace the people, than it is to migrate the data out of the old system into a new system. Because everyone knows how complicated and expensive that is, and this is our fastest-growing line of business.
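The "scripts that click through an old system" the consultant describes are, at their simplest, screen-automation scripts. Below is a hedged, minimal sketch using the pyautogui library; the coordinates and field values are placeholders, and commercial RPA products (UiPath, Automation Anywhere, Blue Prism) use their own tooling rather than this exact approach.

```python
import time
import pyautogui  # drives the keyboard and mouse of whatever application is on screen

# Hypothetical billing record to key into a legacy desktop application.
claim = {"patient_id": "12345", "amount": "250.00"}

pyautogui.PAUSE = 0.5                  # brief pause between actions, like a careful human
pyautogui.click(400, 300)              # placeholder coordinates of the "Patient ID" field
pyautogui.typewrite(claim["patient_id"])
pyautogui.press("tab")                 # move to the next field
pyautogui.typewrite(claim["amount"])
pyautogui.press("enter")               # submit the form
time.sleep(1)                          # wait for the legacy system to process the entry
```

The point of the sketch is that nothing about the old system changes: the "bot" simply drives the same interface a clerk would, which is why it can be cheaper than migrating the data.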

And I thought that was just the most dystopian thing I'd ever heard. But then it turns out to be this massive industry that has grown tentacles everywhere.

Yeah, it's amazing. I mean, my introduction to this world was sort of the same as yours. I was talking to a consultant. I was actually in Davos. That's not my favorite way to start a story.

[Laughs]

But we'll go with it. And in Davos, you know, it's this big conference. I call it the Coachella of capitalism. It's like a week-long festival of rich people and heads of state. The main drag, the promenade, is all corporate-sponsored buildings and tents and, you know, corporations rent out restaurants and turn them into sort of branded hang-out zones for their people and guests during the week. And by far the biggest displays on the promenade the year that I went were from consulting companies. Consulting companies like Deloitte and Accenture and Cognizant and Infosys, and all these companies that are doing massive amounts of business in RPA, or what they sometimes refer to as digital transformation. That's sort of a euphemism.

They were spending millions of dollars and bringing in millions of dollars. And it was like, "What is going on here?" Like, "What are these people actually selling?" And it turns out that a lot of what they're selling is stuff that'll plug into your Oracle database, that'll allow it to talk to this other software suite that you use. The kind of human replacement that you're talking about. It's very expensive to rebuild your entire tech stack if you're an old-line Fortune 500 company. But it's relatively cheap to plug in an RPA bot that'll take out, you know, three to five humans in the billing department.

One of the things in your book that you mention, you call this boring bots. And you go into the process by which, yeah, you don't show up to work one day and there's a robot sitting at your desk. As a company grows and scales, it just stops hiring some of these people. It lets their jobs get smaller and smaller, it doesn't give them pathways up.

I see that very clearly, right? Like if their entire job is pasting from one Excel database, one Excel spreadsheet to another Excel spreadsheet all day, they might themselves just write a macro to do it. Why wouldn't you as a company be like, "We're just gonna automate that"? But all that other stuff in an office is the stuff that you're saying is important. The social camaraderie, the culture of a company. Is that even on the table for these digital transformation companies?

It's not really what they're incentivized to think about. I mean, these consulting firms get brought in to cut costs. And cut costs pretty rapidly. And so that's their mandate and that's what they're doing. Some of the way that they're doing that is by taking out humans. They're also streamlining processes so that maybe you can reorg some of the people who used to work in accounts payable into a different division, give them something to do. But a big piece of the sales pitch is like, you can do as much or more work with many fewer people. And I talked to one consultant in Davos, and I'm sorry, this is the last time I will ever mention Davos on this podcast.

I'm putting your over/under on Davos mentions at five.

[Laughs] It's like the worst name drop in the world. But I talked to one consultant and he said that executives were coming up to him and saying, "How can I basically get rid of 99 percent of the people that I employ?" Like the target was not, "How do we automate a few jobs around the edges? How do we save some money here and here?" It was like, "Can we wipe out basically the entire payroll?"

And, "Is that plausible? And how do we get there as quickly as possible?"

How big is the total RPA market right now?

It's in the billions of dollars. I don't know the exact figure, but the biggest companies in this are called UiPath and Automation Anywhere and there are other companies in this space, like Blue Prism. But just UiPath alone is valued at something like $35 billion and is expected to IPO later this year. So, these are large companies that are doing many billions of dollars in revenue a year, and they're working with most of the Fortune 500 at this point.

And the actual product they sell, is it basically software that uses other software?

A lot of it is that. A lot of it is, this bot will convert between these two file formats or it'll do sort of basic-level optical character recognition so that you can scan expense reports and import that data into Excel, or something like that. So, a lot of it is pretty simple. You know, a lot of AI researchers don't even consider RPA AI, because so much of it is just like static, rule-based algorithms. But a lot of them are starting to layer on more AI and predictive capability and things like that.
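As a hedged sketch of the "scan an expense report and import it into Excel" pattern mentioned above, the snippet below combines pytesseract for OCR with openpyxl for the spreadsheet. The file names and the naive line-by-line parsing are assumptions for illustration, not how any commercial RPA bot actually works.

```python
from PIL import Image
import pytesseract             # OCR wrapper (requires the Tesseract binary to be installed)
from openpyxl import Workbook  # writes .xlsx files

# Hypothetical scanned receipt image.
text = pytesseract.image_to_string(Image.open("expense_receipt.png"))

wb = Workbook()
ws = wb.active
ws.append(["Line item"])       # simple one-column header

for line in text.splitlines():
    if line.strip():           # skip blank lines in the OCR output
        ws.append([line.strip()])

wb.save("expenses.xlsx")
```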

So you get some that are, you know, this plugs into your Salesforce and allows it to talk to this other program that maybe is a little bit older. Some of it is converting between one currency and another. But then there are these kind of digital workers, like you can hire, I'm making air quotes, you can hire a tax auditor, who you just install, it's a robot, and theoretically that can do the work that a person whose job title was tax auditor, did before.

So let's say I run like a mid-size manufacturing company, I'm already thinking about, "Okay, on the line, there are lots of jobs that are dangerous or difficult or super repetitive, and I can run my line 24 hours a day, if I just put a robot on there." Then I'm looking at my back office and I'm saying, "Oh, I've got a lot of accountants and tax lawyers, and, I don't know, invoice preparers and all these people just doing stuff. I wanna hire Automation Anywhere, to come in and replace them." What does that pitch look like from the RPA company?

Well, I went to a conference for Automation Anywhere. This was pre-pandemic when conferences were still a thing.

And, you know, there were executives on stage talking to an audience of corporate executives and telling them that they could save between 20 and 40 percent of their operating costs by automating jobs in their back and middle offices. And so that pitch, you know, some companies might save less than that, some companies might save more than that, but that's the sales pitch: You can be more productive, you can free up workers to focus on higher-value tasks. Oh, and also you can shave 20 to 40 percent off your operating budget.

And so they would come in and they would assess, okay, you use Salesforce, you use an old database, you use some other program, right? I mean, at the end of the day back office work is people sitting down in front of a Windows PC and using it. So they're like, which of these tasks are repetitive?

Yeah. Which are repetitive? What are the steps involved? There are some stories that I've heard of people being sort of asked to train their robot replacements.

To kind of, like, walk the RPA vendor or the consultant through the steps of their jobs so that that can then be programmed into a script. So there's a lot of that, but there's also sort of reimagining processes and like, "Do you really need people in three separate offices touching this piece of paper, or could it be one person and a bot?" I think part of what they market as digital transformation is just going in and asking people, "What outdated stuff are you using and how could we modernize that a little bit?"

One of the themes here is that maybe the entire national political and cultural conversation about automation is pointed at blue-collar work. Right? It's a deindustrialized society, we don't make a lot of things here. Blue-collar workers are hurting all over America. You are talking very much about white-collar workers in corporate America getting replaced by, I mean, let's be honest, very fancy Windows scripting programs.

Yeah, that's where the sort of excess is in the economy. I mean, if you go into a factory today, they're very lean. Most of the jobs in factories that could be automated were automated many years ago. And especially if you go to places like China, I mean, there're factories that have very few humans at all, it's mostly robots. So there isn't a lot of excess there to trim.

On the other hand, a lot of white-collar workplaces are still brimming with people in the back office who are doing these kinds of repetitive tasks. And so that's sort of the strike zone right now. If you are doing repetitive tasks in a corporate environment, in a back office somewhere, your job is not long for this world. But now there's also some more advanced AI that can do kind of more repetitive cognitive work.

One example I talk about in the book is there's a guy I met, who's making essentially production planning software. So this would be not replacing the people in the factories who are working on the assembly line, it'd be replacing their bosses who tell them, "Okay, this part needs to be made in this quantity, on this day, on this machine." And then, you know, "Two days later we're gonna switch to making this part and we need this many units, and they need to go to this part of the warehouse."

All that used to be done by supervisors. And now that work can be mostly automated too. So it's not purely the kind of entry-level data clerks that are getting automated, it's also their bosses in some cases.

That feels like I could map it to a pretty familiar consumer story. You've got a factory, it's got some output. It's almost like a video game, right? You've got a factory, it's got some output, you need to make X, Y, and Z parts in various quantities and you need to deliver on a certain time. And to some extent, your job is to play tower defense and just fill all the bins at the right time. Or you could just play against the computer and the computer will beat you every time. That's what that seems like. It seems very obvious that you should just let the computer do it.

Totally. And that's the logic that a lot of executives have. And I don't even know that that's the wrong logic. Like I don't think we should be preserving jobs that can be automated just to preserve jobs. The concern, I think I, and some other folks who watch this industry have, is that this type of automation is purely substitutive.

So in the past we've had automation that carried positive consequences and negative consequences. So the factory machines put some people out of their jobs, but they created many more jobs and they lowered the cost of the factories' goods and made them more accessible to people and so people bought more of them. And it had this kind of offsetting effect where you had some workers losing their jobs, but more jobs being created elsewhere in the economy that those people could then go do.

And the concern that the economists that I've talked to had, was that this kind of RPA, like replacing people in the back office, like it's not actually that good.

It's not the good kind of automation that actually does move the economy forward. It's kind of this crappy, patchwork automation that purely takes out people and doesn't give them anything else to do. And so I think on a macroeconomic level, the problem with this kind of automation is not actually how advanced it is, it's how simple it is. And if we are worried about the sort of future of the economy and jobs, we should actually want more sophisticated AI, more sophisticated automation that could actually create sort of dynamic, new jobs for these people who are displaced, to go into.

One of the things I think about a lot is, yeah, a lot of white-collar jobs are pretty boring, they're pretty repetitive. One of my favorite TikTok paths to go down is Microsoft Excel TikTok. And there's just a lot of people who are bored at work who have come up with a lot of wild ways to use Excel and they make TikToks about it. And it's great. And I highly recommend it to anyone.

But their jobs are boring. Like the reason they have fodder for their TikTok careers is because Excel is boring and they've made it entertaining. Those jobs, apart from the social element, are sort of unfulfilling, but at the same time, those are the people who might catch mistakes, might come up with a new way of doing something, might flag a new idea. Is that cost baked into the automation puzzle?

No. And in fact, I've heard some stories from companies that did a big RPA implementation, you know, took out a bunch of workers, and then had to start hiring people back because the machines were making mistakes and they weren't catching errors and the quality suffered as a result. So I think there's a danger of overselling the benefits of this kind of automation to these companies. I think some of the firms that are doing this, it's a little more snake oil than real innovation.

So yeah, I think there is a danger of kind of over-automating. But I think the problem is that executives in a lot of companies, and I would say this applies largely outside of tech, this is largely in your beverage companies, hotel chains, Fortune 500 companies that maybe are running on a little bit of outdated technology.

I think the executives at those companies have come to view labor as purely a cost center. It's like, you're optimizing your workforce the same way that you would optimize your factory production. You're trying to do things as efficiently as possible and I don't think there's a lot of appreciation for the benefit that even someone like an Excel number cruncher could have in the organization. Or maybe if you retrain that person to do something different, they could be more productive and more valuable to the organization.

But right now it's just a numbers game. They're trying to hit next quarter's targets and if automating 500 jobs in the back office is the way to do that, then that's what they're gonna do.

You just brought up retraining. In the book you're not so hot on retraining. You don't think it has a lot of benefits. How does that play out?

Well, the data just isn't there on retraining. I mean, this is the sort of go-to stock response when you ask politicians or corporate executives, what do we do about automation and AI displacing jobs? And there's re-skilling, there's up-skilling.

There's telling journalists to learn to code.

Right, there's telling journalists to learn to code. [laughing]

And like, you know, you hear these heartwarming stories about coal miners who got laid off and then went to coding bootcamp and became Python engineers, and started doing front-end software development. But those are the exception rather than the rule. There's a lot of evidence that re-skilling programs actually don't have a long-term positive impact on the people who go through them, in economic terms. And some of that is probably, you know, about the kind of humans who are participating in them.

If you are a coal miner, your skill set is maybe not well-matched to be a software engineer. It's not that they're not smart enough to do it, it's that they frankly sometimes don't want to do it. It's not rewarding in the same way that the old job was. So the long-term benefit of these re-skilling programs is still something that we don't have a lot of evidence for. And there's been some estimates that say private sector re-skilling, companies retraining their own workers, there've been some estimates that something like only one out of every four private sector workers can be profitably retrained.

So we're really talking about something that needs to happen at the federal level if it's gonna happen at all. And right now there's no momentum on that from either side of the aisle in Washington, to do any kind of federal retraining program.

The politician who comes to mind first and most clearly in this conversation is obviously Andrew Yang, who ran in the Democratic primary. He only talked about automation, basically. He's advocated for universal basic income because he says automation is coming for all of our jobs. Is his approach more focused on the boring bot white-collar automation? Or is it at the manufacturing level?

No. And I think this is a place where he and I disagree. I mean, I like Andrew. I think he was right on a lot, but I think, you know, when he's talking on the trail about automation, he's largely talking about blue-collar automation. He talks a lot about truck drivers and manufacturing workers and even retail workers. And I'm sort of sold on this idea that those industries are actually not the issue right now; the more pressing and urgent issue is white-collar automation.

And I think something like self-driving trucks is a great example of something that I am not as worried about as he is, because absolutely there will be self-driving trucks, and absolutely some truck drivers will lose their jobs. And the same goes for self-driving cars and, you know, taxi drivers and delivery drivers. I mean, there's going to be disruption there, but those are actually like gigantic technological achievements.

They will unlock huge new industries. I mean, you can just imagine, when there are self-driving cars, there will be self-driving hotels and restaurants and gyms, and there'll be all kinds of jobs popping up for people who are making and selling these cars, who are repairing them, who are programming them, who are developing the hospitality around them. It's like, there's gonna be a lot of dynamism in that industry. So while, yes, it will crush some jobs, it will also save lives because it'll be safer than the human drivers and it'll open up new opportunities for people. So that's an area where I'm actually not as pessimistic as Andrew Yang is.

What do you think about universal basic income?

I think it's a pretty good idea. I mean, what we're learning now with the stimulus checks is that giving people direct cash transfers is a really good idea in times when things are perilous and you need to give people a way to stay afloat. And there are other ideas that I think are wise too. I mean right now the tax rate for labor is a lot higher than for capital and for equipment. So companies are actually financially incentivized to automate more jobs because they get taxed less on money that they spend on robots versus on employing humans. So I think equalizing those tax rates could be a way to deal with this on a policy level.

But ultimately I think we have a long way to go on any of this stuff. There aren't really a lot of politicians agitating for this except for Andrew Yang. So I think my goal is not to give people perfect policy recommendations. I'm assuming some sort of stasis on the government level, and I'm trying to convince people that it's in their interest to take this into their own hands and come up with their own plans. Because I don't think the cavalry are coming.

One of the things that I have talked about, on maybe every episode of the show is how trends have accelerated in the pandemic. And obviously we're moving to remote work, we're out of offices. Even maybe three years ago, I was at a Microsoft event and I saw Satya Nadella, CEO of Microsoft. And he was talking about all the things they were doing, and at the end he's like, "And I just heard about this robotic process automation. It sounds amazing."

And now it's like, oh, everyone's doing it. Microsoft is in that business. He went from, "I thought it was interesting" to, "If you're writing robots to use Excel, we're gonna write the robots for you." That is a huge business. That's a great business for Microsoft to be in. Google's doing it. You mentioned the other two companies that are already big. How much has the pandemic accelerated this curve?

A huge amount. I mean, I talked to a bunch of consultants who get these calls to come in and automate, you know, the call center or the finance department at big companies. And they said, there are basically two reasons why things have accelerated. One is that, I think, the pandemic has created a lot more demand for certain types of services and goods and created some supply chain issues. And so companies actually need to automate parts of their operations just to keep up with the demand.

But they also mention that there's been this kind of political cover that the pandemic gave the executives, because a lot of this technology, the RPA technology, is not new. Like this has been around. It's not sophisticated, it's not mind-blowing in its complexity. But it's fairly obviously displacing workers, and so a lot of executives have resisted it because, you know, it doesn't save them that much money, it's not that much more productive or accurate than the humans doing those jobs, and if they implement RPA in normal times, workers get freaked out. There's a backlash, maybe the mayor of their city calls and asks them why they're automating jobs. It's a political headache in the instances when it happens publicly.

But during COVID there's been no real backlash to that. In fact, customers want automation because it lets them get goods and services without coming into contact with humans who might potentially be sick. So it kind of freed up executives to do the kind of RPA automation that they had been wanting to do and have been capable of doing for years. And so the consultants I talked to said, yeah, we're fielding calls from a lot of people who are saying, "Yeah, let's do that automation project we talked about a couple years ago. Now is the right time."

You're gonna come into our back office, while everyone's out of the office, and figure out which accountants we don't need anymore.

Exactly, and you know, there's some precedent for this. I mean, economic disruption is often when big changes happen in the workplace. You've already seen millions of jobs disappearing during the pandemic, and some of those jobs might not come back. It might just be that these companies are able to operate with many fewer people.

So you've called them boring bots. You say the technology is not so sophisticated. The industry calls it RPA. Like, there's a lot of pressure on making this seem not the most technologically sophisticated or exciting thing. It comes with a lot of change, but I'm wondering, are there any stories of RPA going horribly wrong?

I'm just imagining like, I think the most consumer-facing automation is, you call the customer support line and you go through the phone tree. It makes all the sense in the world on paper: if all I need is the balance of my credit card, I should just press 5 and a robot will read it to me, but like I just want to talk to a person every time. Because that phone tree never has the options I want or it's always confused or something is wrong. There has to be a similar story in the back office where the accounting software went completely sideways and no one caught it, right?

Yeah, I mean, there's several stories like that in the book. There's a trading firm called Knight Capital that had an algorithm go haywire and it lost millions of dollars in milliseconds. There was actually just a story in the financial markets, I forget who it was, one of the big banks accidentally wired hundreds of millions of dollars to someone else and couldn't get it back. And so it was just like, they just lost that. I'm sure that automation had some role in that, but that might have been a human error.

But there are also lower-level instances of this going haywire. One of the examples I talk about in the book is this guy Mike Fowler, who is an Australian entrepreneur who came up with a way to automate T-shirt design. So, I don't know if you remember like five or six years ago, but there were all these auto-generated T-shirts on Facebook that were advertised. So, you know, it'd be like, "Kiss me, I'm a tech blogger who loves punk rock." You know, and those would just be like Mad Libs, you know?

Hang on, I gotta buy a T-shirt.

[Laughing] Or like, "My other car is a flying bike," or whatever. You know, it was just the weirdest, most nonsensical combinations of demographic targeting IDs, like plugged into T-shirt designs and uploaded to the internet. And Mike Fowler was one of the people who was making that, and he pioneered this algorithm that would take, you know, sort of catchphrases, and plug words into them and then automatically generate the designs and list the SKUs on Amazon and make the ads for Facebook.

And so he made a lot of money doing this, and then one day it went totally wrong because he hadn't cleaned up the word bank that this algorithm drew from. So there were people noticing shirts for sale on Amazon that were saying things like "Keep calm and hit her," or "Keep calm and rape a lot." Like just words that he had forgotten to clean out of the database, and so as a result, his store got taken down. He lost all his business. He had to change jobs, like it was a traumatic event for him. And that's a colorful example but there are, I'm sure, lots of more mundane examples of this happening at places that have implemented RPA.
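A minimal sketch of the failure mode Roose describes: templated slogan generation is trivial to write, and the safety of the output depends entirely on filtering the word bank. The template, word list, and blocklist below are invented for illustration and stand in for the kind of check Fowler's pipeline was missing.

```python
import itertools

TEMPLATE = "Keep calm and {verb} {noun}"

# Hypothetical word bank compiled from an unvetted source.
verbs = ["love", "pet", "walk", "harm"]   # "harm" stands in for an unreviewed word
nouns = ["your dog", "the cat"]

BLOCKLIST = {"harm"}                       # words that must never reach a product listing

def generate_slogans(verbs, nouns):
    for verb, noun in itertools.product(verbs, nouns):
        if verb in BLOCKLIST:
            continue                       # the filtering step that prevents offensive output
        yield TEMPLATE.format(verb=verb, noun=noun)

for slogan in generate_slogans(verbs, nouns):
    print(slogan)
```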

Is that cost baked in? I'm imagining, you know, the mid-sized bottling firm in the Midwest and the slick top five consulting companies selling RPA, "Everything's gonna be great." Then they leave. The software is going sideways. No one really knows how to use it. Like, is that all baked into the cost? Is that just, the consulting company gets to come back in and charge you more money to fix it?

I think that's how it's going a lot of the time. The consulting companies end up sort of playing a kind of oversight role with the bots when they malfunction. Because there just isn't a whole lot of tech expertise in a lot of these companies, and certainly not for things like this. So, yeah, the consulting companies are making money hand over fist on this. There's no question about it. And this has been a transformative line of business for them because it's actually like, it's not that hard, frankly.

Go here to see the original:

The robots are coming for your office - The Verge

Posted in Corona Virus

Robotic fish learns to match its swimming speed to the current – New Atlas

Posted: at 2:50 am

Fish have a sensory system known as the lateral line, which allows them to detect movements, vibrations and pressure gradients in the water. Scientists have now given a robotic fish its own version of that system, letting it determine the best swimming speed.

The study involved researchers from the Max Planck Institute for Intelligent Systems (Germany), Seoul National University and Harvard University. They created a soft-bodied fish-inspired robot, which was able to swim in place against a water current passing through a tank.

Its undulating swimming motion was made possible thanks to a series of linked silicone chambers, located along either side of its body. Air was alternately pumped into the chambers on one side and out of those on the other; this caused the inflated side to expand and curve outwards, while the deflated side curled inwards.

The robot's lateral line system consisted of two liquid-metal-filled silicone microchannels, running the length of each side. As each of those channels stretched while that side of the body curved, the electrical resistance of the liquid metal within increased. Therefore, by monitoring the changes in resistance, it was possible to determine how much a given amount of air pressure caused the robot's body to undulate.

The scientists proceeded to set up a self-learning loop, in which a computer connected to the robot measured the changing water current velocity, then automatically adjusted the air pressure in response to that information. Doing so allowed the robot to continuously maintain a swimming speed which matched that of the current. In a natural environment such as a river, this would keep the robot from being swept downstream when not proceeding forward.
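The article doesn't include the team's controller, but the loop it describes (measure the oncoming current, read the strain sensors, adjust the air pressure until the swimming speed matches the flow) can be sketched as a simple proportional feedback loop. Everything below is illustrative: the function names, calibration constant, and gain are assumptions, and the researchers describe a data-driven controller rather than this bare-bones version.

```python
def estimate_swim_speed(strain_amplitude: float, calibration: float = 0.8) -> float:
    """Map the liquid-metal strain signal (body undulation amplitude) to an
    estimated swimming speed; a real system would calibrate this mapping."""
    return calibration * strain_amplitude

def station_keeping_step(pressure: float, flow_velocity: float,
                         strain_amplitude: float,
                         gain: float = 0.5, max_pressure: float = 1.0) -> float:
    """One control iteration: nudge the pump pressure so the estimated
    swimming speed approaches the measured current velocity."""
    error = flow_velocity - estimate_swim_speed(strain_amplitude)  # positive -> swim faster
    return min(max(pressure + gain * error, 0.0), max_pressure)

# Made-up sensor readings: the pressure creeps upward as long as the
# estimated swim speed still lags the measured flow.
p = 0.2
for flow, strain in [(0.30, 0.20), (0.30, 0.28), (0.35, 0.33)]:
    p = station_keeping_step(p, flow_velocity=flow, strain_amplitude=strain)
    print(round(p, 3))
```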

"This robot will allow us to test and refine hypotheses regarding the neuromechanics of swimming animals as well as help us improve future underwater robots," says Max Planck's Dr. Ardian Jusufi. "In addition to characterizing the soft strain sensor under submerged dynamic conditions for the first time, we also developed a simple and flexible data-driven modelling approach in order to design our swimming feedback controller."

A paper on the research was recently published in the journal Advanced Intelligent Systems. The robot can be seen in action in the video below.

Source: Max Planck Institute for Intelligent Systems

From a real fish to a soft robotic model - a publication in Wiley's Advanced Intelligent Systems

Go here to read the rest:

Robotic fish learns to match its swimming speed to the current - New Atlas

Posted in Corona Virus

After 50 Years, Physicists Confirm The Existence of an Elusive Quasiparticle – ScienceAlert

Posted: at 2:48 am

Through painstaking work, scientists have found evidence of a quasiparticle that was first imagined as a hypothesis almost 50 years ago: the odderon.

The odderon is a combination of subatomic particles rather than a new fundamental particle but it does act like the latter in some respects, and the way it fits into the fundamental building blocks of matter makes the discovery a huge moment for physicists.

The odderon was finally revealed through a detailed analysis of two groups of data, reaching the 5-sigma level of statistical significance that researchers use as a discovery threshold.

"This means that if the odderon did not exist, the probability that we observe an effect like this in the data by chance would be 1 in 3.5 million," says physicist Cristian Baldenegrofrom the University of Kansas.

Particles like protons and neutrons are made up of smaller subatomic particles: put simply, quarks are 'stuck together' by the force-carrying gluons. Smacking protons together in a particle accelerator gives us an opportunity to glimpse into their gluon-laden guts.

When two protons are smashed together but somehow survive the encounter, this interaction - a type of elastic scattering - can be explained by the protons exchanging either an even or odd number of gluons.

If that number is even, it's the work of a pomeron quasiparticle; the other option, which seems to happen much less often, is an odderon quasiparticle, a compound with an odd number of gluons.

Until now, scientists have been unable to spot odderons in experiments, even though theoretical quantum physics has suggested they should exist.

Here, researchers crunched the numbers on a vast set of data from the Large Hadron Collider (LHC) particle accelerator at CERN in Switzerland and the Tevatron particle accelerator at Fermilab in the US.

Millions of data points were studied to compare proton-proton or proton-antiproton collisions, until the scientists were convinced they'd seen results - an odd-numbered gluonic compound - that would only be possible if the odderon existed.

The comparison between the two types of collisions revealed a distinct difference in energy being exchanged - that difference is evidence of the odderon. The team then combined more precise measurements from a previous experiment in 2018 that ruled out some of the uncertainties, allowing them to reach the high certainty level of detection for the first time.

This discovery also helps fill in some of the gaps in the modern picture of quantum chromodynamics, or QCD, the theory of how quarks and gluons interact at the smallest level. We're talking about the state of matter at the smallest scales, and how everything in the Universe gets put together.

What's more, the specialized technology developed to help track down the odderon could have a variety of other uses in the future, the researchers say: in medical instruments, for example.

While this research doesn't answer every question about the odderon and how it functions, it's the best proof yet that it exists. Future particle accelerator experiments will be able to add further confirmation, and no doubt raise a few more questions.

"Searching for signatures of the odderon is a very different task in comparison to what is traditionally done in particle physics," Baldenegro said.

"For instance, in searching for the Higgs boson or the top quark, one looks for a 'bump' over a smooth invariant mass distribution, which is already very challenging. The odderon, on the other hand, has much more subtle signatures. This has made the hunt for the odderon so much more challenging.''

The paper has been submitted for publication in Physical Review Letters and is available as a preprint on arXiv; connected research has been published in the European Physical Journal C.

Read more:

After 50 Years, Physicists Confirm The Existence of an Elusive Quasiparticle - ScienceAlert

Posted in Quantum Physics

AI and robotics are helping optimize farms to increase productivity and crop yields – TechRepublic

Posted: at 2:48 am

One company built an autonomous vehicle to help haul crops, saving work and time. Others use drones and sensors to communicate with farmers.


Farmers have long struggled with operational optimization and labor concerns. Finding enough labor to get the job done, as well as keeping workers safe is a constant struggle.

"There is an immediate need to improve efficiency and reduce costs, especially now that the pandemic has exposed just how fragile the supply chain is," said Suma Reddy, CEO of Future Acres, an agricultural robotics and artificial intelligence company. "We saw shortages in both production and more workers being put at risk when picking specialty crops on a daily basis that have really caused the industry to take a step back and re-examine how we can create greater resiliency in the food chain."

SEE: Natural language processing: A cheat sheet (TechRepublic)

One idea is to equip farms with a combination of AI and robotics that can "think through" as well as do some of the physical work of farming.

"We introduced Carry for that purpose," Reddy said. "It's an autonomous, electric agricultural robotic harvest companion to help farmers gather hand-picked crops faster and with less physical demand."

The Future Acres Carry helps transport harvested crops using AI.

Image: Future Acres

The self-driving Carry vehicle uses a combination of AI, automation and electric power to transport up to 500 pounds of crops. Reddy estimates that Carry can increase production efficiency by up to 30%, paying for the vehicle investment in 80 days.

"Our initial launch was targeted at customers at small- to medium-sized table-grape farms in the U.S. that are larger than 100 acres," Reddy said. "Grapes were the specialty crop we focused on initially, but the specialty crop market covers more than just grapes, and we believe that Carry can improve the harvesting of those types of crops as well."

Mordor Intelligence estimates that the AI market in agriculture, valued at $766.41 million in 2020, will reach $2.5 billion by 2026. This is a compound annual growth rate of 21.52% between 2021 and 2026.
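As a quick sanity check on those figures (the numbers come from the article, the arithmetic below is mine, and Mordor Intelligence's exact baseline year may differ), compounding the 2020 valuation forward lands close to the 2026 forecast:

```python
# Rough check of the quoted market-size figures.
start, end, years = 766.41, 2500.0, 6              # $M in 2020 -> $M forecast for 2026
implied_cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR, 2020-2026: {implied_cagr:.1%}")                         # ~21.8%
print(f"766.41 compounded at 21.52% for 6 years: {766.41 * 1.2152**6:,.0f}")  # ~2,468
```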

SEE: Smart farming: How IoT, robotics, and AI are tackling one of the biggest problems of the century (TechRepublic)

In this market, Carry is just one example of an array of autonomous technologies in agriculture that include AI, robotics and automation. Other examples are autonomous tractors and harvesters, as well as aerial drones that map fields and identify topography, soil types and moisture content from the air to provide input for prescriptive fertilizers that AI develops in order to optimize crop yields.

"In our case, we wanted to provide a robotic harvest companion that can transport up to 500 pounds of crops on all types of terrain and in all weather conditions," Reddy said. "To do this, we use machine learning and computer vision capabilities that enable the vehicle to avoid obstacles like trees and people, and to collect and apply data to further optimize precision and efficiency."

SEE: Future of farming: AI-enabled harvest robot flexes new dexterity skills (TechRepublic)

As with any technological advancement, trial-and-error proofs of concept are needed. Farming operational habits also need to be changed in order to take advantage of new technology.

What Reddy and others in the field have learned is that trialing AI and robotics in actual use cases offers the only true test of how well the technology performs. This is a universal truth for all types of AI and robotics, not just the ones that find themselves in a farmer's field.

As a one-time Peace Corps volunteer in Africa, Reddy wanted to "build a better bridge between how we manage our resources and build a better future." Her company and others are now transforming agriculture with the help of big data, analytics and hardware, and it can't come too soon. The United Nations estimates that in 30 years, the global population will reach 9.7 billion people, and there will be a need to provide 50% more food by 2050.

Now is the time for AI and robotics solution providers to jump in.


Read more from the original source:

AI and robotics are helping optimize farms to increase productivity and crop yields - TechRepublic

Posted in Robotics

Daedalean reveals partnership with Reliable Robotics – sUAS News

Posted: at 2:48 am

On March 22, 2021, two companies, Daedalean and Reliable Robotics, announced their new partnership to build advanced navigation and situational awareness (SA) systems for commercial aircraft operations. With the new certifiable technology, onboard or remote pilots will benefit from next-generation flight automation systems. The proprietary solution enables onboard pilots and remote pilots to make faster, better-informed decisions based on the advanced sensors provided by the system.

Reliable Robotics is a leader in aircraft automation. In recent months, the company demonstrated pioneering capabilities of its systems by remotely piloting a Cessna 208 Caravan from a control centre in its headquarters over 50 miles away. In 2019, the company made aviation history operating a remotely piloted Cessna 172 Skyhawk over a populated region with no one on board, and it subsequently demonstrated a fully automated landing of the larger Cessna 208 in 2020 on the third day of flight testing.

Daedalean's systems can now feed this information, about the aircraft's position relative to the terrain, with its obstacles and safe landing sites, and relative to other traffic, to the Reliable Robotics flight control stack, which then has an additional layer of safety at its disposal and can use it to deal with multiple contingencies such as jammed or disabled GPS, non-cooperative traffic, or emergency landing scenarios.

The end product both companies foresee is a system that can operate in the airspace as a model citizen, enabling denser economic use of the airspace at safety levels that are an order of magnitude above today's standards.

"Reliable Robotics has the most credible system for remotely piloted operations, with immediate applications for cargo operators," said Luuk van Dijk, Founder and CEO of Daedalean. "Our team has developed advanced machine learning that can adapt to the inherent uncertainties in airspace and increasing levels of onboard autonomy. Bringing our core competencies together was a logical next step to jointly develop a solution set that makes aircraft safer."

"Both companies have been built on the principle that certification is paramount from day one," said Robert Rose, Co-founder and CEO of Reliable Robotics. "Daedalean is the recognized leader when it comes to developing machine learning systems within the required regulatory framework. This is not a domain where you build something first and then figure out how to certify it later."

More here:

Daedalean reveals partnership with Reliable Robotics - sUAS News

Posted in Robotics

Vaarst launches to drive the future of marine robotics through data focus | RoboticsTomorrow – Robotics Tomorrow

Posted: at 2:47 am

Launching today, robotics technology player Vaarst will give offshore and marine robotics new capabilities through retrofitted artificial intelligence and autonomy.

Bristol, UK; 24 March 2021. Vaarst, a technology spin-off from leading subsea robotic and hydrographic survey company Rovco, was formally launched today with the goal of revolutionising the offshore robotics sector - leveraging intelligent data flows for smart asset management and creating an energy-efficient and more sustainable future.

Vaarst will target the energy and marine sectors through its innovative technologies, such as SubSLAM X2 - an intelligent data collection system that delivers robotic spatial awareness and live 3D point clouds to any device in the world, without costly positioning systems, thereby saving many project days. This, combined with the company's machine learning and autonomy expertise, will then provide the very best in efficient data collection and AI interpretation.

The new spinout company, Vaarst, is predicting immediate 2021 revenues of over £1m, rising rapidly to £20m+ in the next few years.

Vaarst CEO and Founder, Brian Allen, said: "Autonomous robotics are the key to reducing the cost of offshore operations. At the same time, digitalisation of field assets is essential as the industry evolves; marrying these two concepts is needed to realise the real benefit of modern tech. It's the data that has to drive the vehicles. Vaarst is committed to unlocking the potential of offshore robotics for all."

He continues: "We're tremendously excited about the future, and really delivering our customers' digital and robotic ambitions."

Vaarst will operate globally, with headquarters in Bristol, and has 29 employees with plans to grow to 70+ by end of 2022. The company is a technology spin-off of Rovco which was founded in 2015 and has invested heavily in real-time artificial intelligence-based 3D vision and autonomy systems. Future plans will see Vaarst take its offering to the wider industrial robotics global markets in sectors such as mining, construction, farming and land survey.

Here is the original post:

Vaarst launches to drive the future of marine robotics through data focus | RoboticsTomorrow - Robotics Tomorrow

Posted in Robotics

Reinforcement learning with artificial microswimmers – Science

Posted: at 2:47 am

Abstract

Artificial microswimmers that can replicate the complex behavior of active matter are often designed to mimic the self-propulsion of microscopic living organisms. However, compared with their living counterparts, artificial microswimmers have a limited ability to adapt to environmental signals or to retain a physical memory to yield optimized emergent behavior. Different from macroscopic living systems and robots, both microscopic living organisms and artificial microswimmers are subject to Brownian motion, which randomizes their position and propulsion direction. Here, we combine real-world artificial active particles with machine learning algorithms to explore their adaptive behavior in a noisy environment with reinforcement learning. We use a real-time control of self-thermophoretic active particles to demonstrate the solution of a simple standard navigation problem under the inevitable influence of Brownian motion at these length scales. We show that, with external control, collective learning is possible. Concerning the learning under noise, we find that noise decreases the learning speed, modifies the optimal behavior, and also increases the strength of the decisions made. As a consequence of time delay in the feedback loop controlling the particles, an optimum velocity, reminiscent of optimal run-and-tumble times of bacteria, is found for the system, which is conjectured to be a universal property of systems exhibiting delayed response in a noisy environment.

Living organisms adapt their behavior according to their environment to achieve a particular goal. Information about the state of the environment is sensed, processed, and encoded in biochemical processes in the organism to provide appropriate actions or properties. These learning or adaptive processes occur within the lifetime of a generation, over multiple generations, or over evolutionarily relevant time scales. They lead to specific behaviors of individuals and collectives. Swarms of fish or flocks of birds have developed collective strategies adapted to the existence of predators (1), and collective hunting may represent a more efficient foraging tactic (2). Birds learn how to use convective air flows (3). Sperm have evolved complex swimming patterns to explore chemical gradients in chemotaxis (4), and bacteria express specific shapes to follow gravity (5).

Inspired by these optimization processes, learning strategies that reduce the complexity of the physical and chemical processes in living matter to a mathematical procedure have been developed (6). Many of these learning strategies have been implemented into robotic systems (7–9). One particular framework is reinforcement learning (RL), in which an agent gains experience by interacting with its environment (10). The value of this experience relates to rewards (or penalties) connected to the states that the agent can occupy. The learning process then maximizes the cumulative reward for a chain of actions to obtain the so-called policy. This policy advises the agent which action to take. Recent computational studies, for example, reveal that RL can provide optimal strategies for the navigation of active particles through flows (11–13), the swarming of robots (14–16), the soaring of birds (3), or the development of collective motion (17). How fish can harness the vortices in the flow field of others for energy-efficient swimming has also been explored (18). Strategies of how to optimally steer active particles in a potential energy landscape (19) have been explored in simulations, and deep Q-learning approaches have been suggested to navigate colloidal robots in an unknown environment (20).

Artificial microswimmers are a class of active materials that integrate the fundamental functionality of persistent directed motion, common to their biological counterparts, into a user-designed microscopic object (21). Their motility has already revealed insights into a number of fundamental processes, including collective phenomena (22–24), and they are explored for drug delivery (25) and environmental purposes (26). However, the integration of energy supply, sensing, signal processing, memory, and propulsion into a micrometer-sized artificial swimmer remains a technological challenge (27). Hence, external control strategies have been applied to introduce sensing and signal processing, yet only schemes with rigid rules simulating specific behaviors have been developed (28–31). Combining elements of machine learning and real-world artificial microswimmers would considerably extend the current computational studies into real-world applications for the future development of smart artificial microswimmers (32).

Here, we incorporate algorithms of RL with external control strategies into the motion of artificial microswimmers in an aqueous solution. While the learning algorithm is running on a computer, we control a real agent acting in a real world subjected to thermal fluctuations, hydrodynamic and steric interactions, and many other influences. In this way, it is possible to include real-world objects in a simulation, which will help to close the so-called reality gap, i.e., the difference between pure in silico learning and real-world machine learning even at microscopic length scales (27). Our experimental investigation thus goes beyond previous purely computational studies (3, 11–13, 20). It allows us to observe the whole learning process optimizing parameters, which are not accessible in studies of biological species, to identify the most important ingredients of the real dynamics and to set up more realistic, but still simple, models based on this information. It also provides a glimpse of the challenges of RL for objects at those length scales for future developments.

To couple machine learning with microswimmers, we used a light-controlled self-thermophoretic microswimmer with surface-attached gold nanoparticles (Fig. 1A and see the Supplementary Materials). For self-propulsion, the swimmer has to break the time symmetry of low Reynolds number hydrodynamics (33). This is achieved by an asymmetric illumination of the particle with laser light of 532-nm wavelength. It is absorbed by the gold nanoparticles and generates a temperature gradient along their surface, inducing thermo-osmotic surface flows and lastly resulting in a self-propulsion of the microswimmer suspended in water. The direction of propulsion is set by the vector pointing from the laser position to the center of the particle. The asymmetric illumination is maintained during the particle motion by following the swimmer's position in real time and steering the heating laser (see the Methods section below). As compared with other types of swimmers (28, 34, 35), this symmetric swimmer removes the time scale of rotational diffusion from the swimmer's motion and provides an enhanced steering accuracy (36, 37) (see the Supplementary Materials).

(A) Sketch of the self-thermophoretic symmetric microswimmer. The particles used have an average radius of r = 1.09 μm and were covered on 30% of their surface with gold nanoparticles of about 10 nm diameter. A heating laser illuminates the colloid asymmetrically (at a distance d from the center), and the swimmer acquires a well-defined thermophoretic velocity v. (B) The gridworld contains 25 inner states (blue) with one goal at the top right corner (green). A set of 24 boundary states (red) is defined for the study of the noise influence. (C) In each of the states, we consider eight possible actions in which the particle is thermophoretically propelled along the indicated directions by positioning the laser focus accordingly. (D) The RL loop starts with measuring the position of the active particle and determining the state. For this state, a specific action is determined with the ε-greedy procedure (see the Supplementary Materials for details). Afterward, a transition is made, the new state is determined, and a reward for the transition is given. On the basis of this reward, the Q-matrix is updated, and the procedure starts from step 1 until an episode ends by reaching the goal or exiting the gridworld to a boundary state.

To show RL with a real-world microscopic agent, we refer to the standard problem of RL, the gridworld. The gridworld problem allows us to have an experimental demonstration while being able to access the problem numerically. We coarse grain a sample region of 30 μm by 30 μm into a gridworld of 25 states (s, 5 × 5), each state having a dimension of 6 μm by 6 μm (Fig. 1B). One of the states is defined as the target state (goal), which the swimmer is learning to reach. The gridworld is surrounded by 24 boundary states according to Fig. 1B. The obtained real-time swimmer position is used to identify the state s in which the swimmer currently resides. To move between states, we define eight actions a. The actions are carried out by placing the heating laser at the corresponding position on the circumference of the particle (see Fig. 1C). A sequence of actions defines an episode in the gridworld, which ends when the swimmer either leaves the gridworld to a boundary state or reaches the target state. During an episode, rewards or penalties are given. Specifically, the microswimmer gets a reward once it reaches the target state and a penalty in other cases (see the Supplementary Materials for details on the reward definitions). The reward function R thus only depends on the state s, i.e., R = R(s).

We have implemented the model-free Q-learning algorithm to find the optimal policy that solves the navigation problem (38). The gained experience of the agent is stored in the Q-matrix (10), which tracks the utilities of the different actions a in each state s. When the swimmer transitions between two states s and s′ (see the Supplementary Materials for details on the choice of the next state), the Q-matrix is updated according to

$$Q_{t+\Delta t}(s,a) = Q_t(s,a) + \alpha\left[R(s') + \gamma \max_{a'} Q_t(s',a') - Q_t(s,a)\right] \quad (1)$$

taking into account the reward R(s′) of the next state, the utility of the next state Q_t(s′, a′) after taking the best action a′, and the current utility Q_t(s, a). The influence of these values is controlled by two factors, the learning rate α and the discount factor γ. The learning rate defines the fraction at which new information is incorporated into the Q-matrix, and the discount factor determines the value of future events in the learning process. The reward function is the only feedback signal that the system receives to figure out what it should learn. The result of this RL procedure is the optimal policy function π*(s) → a, which represents the learned knowledge of the system, π*(s) = argmax_a Q_∞(s, a), with Q_∞(s, a) = lim_{t→∞} Q_t(s, a). Figure 1D highlights the experimental procedure of actuating the swimmer and updating the Q-matrix. As compared with computer models solving the gridworld with deterministic agents, there are four important differences to note. (i) The swimmer can occupy all positions within each state of 6 μm by 6 μm size. It can be arbitrarily close to the boundary. (ii) The swimmer moves in several steps through each state before making a transition. A swimmer velocity of v = 3 μm s⁻¹ leads to a displacement of about 6 μm within 2 s, corresponding to about 11 frames at an inverse frame rate t_exp = 180 ms until a transition to the next state is made. (iii) The new state after a transition does not have to be the state that was targeted by the actions. The microswimmers are subject to Brownian motion with a measured diffusion coefficient of D = 0.1 μm² s⁻¹. The trajectory is therefore partially nondeterministic. In this respect, the system we consider captures a very important feature of active matter on small length scales that is inherent to all microscopic biological systems, where active processes have been optimized to yield robust functions in a noisy background. (iv) Due to a time delay in the feedback loop controlling the active particles, the action applied to the swimmer is not determined from its present position but from its position in the past, which is a common feature for all living and nonliving responsive systems.
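To make Eq. 1 concrete, here is a minimal tabular Q-learning sketch for a 5 × 5 gridworld with eight actions. It is only an illustration: the reward values, learning rate, discount factor, and exploration rate are placeholders rather than the experimental settings, and the transitions here are deterministic, whereas the real swimmer also diffuses between cells and acts with a delay.

```python
import numpy as np

# Minimal tabular Q-learning sketch for a 5 x 5 gridworld with eight actions.
# Rewards, alpha, gamma, and epsilon are illustrative, not the experimental values.
rng = np.random.default_rng(0)

N = 5                                   # 5 x 5 inner states
GOAL = (4, 4)                           # goal cell (one corner of the grid)
ACTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]   # eight compass directions

alpha, gamma, eps = 0.3, 0.9, 0.1
Q = rng.random((N, N, len(ACTIONS)))    # randomly initialized Q-matrix

def step(state, action):
    """Deterministic transition; the real swimmer is also subject to Brownian motion."""
    r, c = state[0] + ACTIONS[action][0], state[1] + ACTIONS[action][1]
    if not (0 <= r < N and 0 <= c < N):
        return None, -10.0              # left the grid to a boundary state: penalty
    if (r, c) == GOAL:
        return (r, c), 10.0             # reached the goal: reward
    return (r, c), -1.0                 # ordinary transition: small penalty

for episode in range(2000):
    s = (int(rng.integers(N)), int(rng.integers(N)))   # random start cell
    while s is not None and s != GOAL:
        # epsilon-greedy action choice
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, reward = step(s, a)
        # goal and boundary are treated as terminal (no bootstrap value)
        best_next = 0.0 if (s_next is None or s_next == GOAL) else float(np.max(Q[s_next]))
        Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a])   # update of Eq. 1
        s = s_next

print(np.argmax(Q, axis=2))             # learned policy: best action index per cell
```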

Figure 2 summarizes the learning process of our microswimmer for boundary states with R = 0 and a velocity of v = 3.0 μm s⁻¹, v = ⟨Δr · e⟩/t_exp, where ⟨Δr · e⟩ is the mean projected displacement of the swimmer along the direction of the action e. Over the course of more than 5000 transitions (more than 400 episodes, about 7 hours of experiment), the sum of all Q-matrix entries converges (Fig. 2A). During this time, the mean number of transitions to reach the goal state decreases from about 600 transitions to less than 100 transitions (Fig. 2B). Accordingly, the trajectories of the swimmer become more deterministic, and the swimmer reaches the goal state independent of the initial state (Fig. 2C and inset). As a result of the learning process, the initial random policy is changing into a policy driving the swimmer toward the goal state. In this respect, the final policy provides an effective drift field with an absorbing boundary at the goal state (Fig. 2D). During this process, which correlates the actions of neighboring cells, the average projected velocity v causing the drift toward the goal also increases. Although the obtained policy is reflecting the best actions only, the Q-matrix shown in Fig. 2E provides the cumulative information that the swimmer obtained on the environment. It delivers, for example, also information on how much better the best action in a state has been as compared with the other possible actions. The representation in Fig. 2E encodes the Q-matrix value in the brightness of eight squares at the boundary of each state (center square has no meaning). Brighter colors thereby denote larger Q-matrix values.

(A) Learning progress for a single microswimmer in a gridworld at a velocity of v = 3.0 μm s⁻¹. The progress is quantified by the sum of all Q-matrix elements at each transition of the learning process. The Q-matrix was initialized randomly. The shaded regions denote a set of 25 episodes in the learning process, where the starting point is randomly chosen. (B) Mean number of steps required to reach the target when starting at the lower left corner as the number of the learning episodes increases. (C) Different examples of the behavior of a single microswimmer at different stages of the learning process. The first example corresponds to a swimmer starting at the beginning of the learning process at an arbitrary position in the gridworld. The trajectory is characterized by a large number of loops. With an increasing number of learning episodes, the trajectories become more persistent in their motion toward the goal. This is also reflected by the decreasing average number of steps taken to reach the goal [see (B)]. The inset in the rightmost graph reveals trajectories from different starting positions. (D) Policies π(s) = argmax_a Q_t(s, a) defined by the Q-matrix before (Q_t(s, a) = Q_0(s, a)) and after (Q_t(s, a) = Q_∞(s, a)) the convergence of the learning process. (E) Color representation of the initial and the final Q-matrix for the learning process. The small squares in each state represent the utility of the corresponding action (same order as in Fig. 1C) given by its Q-matrix entry, except for the central square. Darker colors show smaller utility, and brighter colors show a better utility of the corresponding action.

Because our gridworld is overlaid on the real-world sample, we may also define arbitrary obstacles by providing penalties in certain regions. Figure 3 (A and B) shows examples of trajectories and policies where the particles have been trained to reach a goal state close to a virtual obstacle. Similarly, real-world obstacles can be inserted into the sample to prevent the particle from accessing specific regions and thus realizing certain actions. More complex applications can involve the emergence of collective behavior, where the motion of multiple agents is controlled simultaneously (30). Different levels of collective and cooperative learning may be addressed (14, 39). A true collective learning is carried out when the swimmer is taking an action to maximize the reward of the collective, not only its individual one. Swimmers may also learn to act as a collective when positive rewards are given if an agent behaves like others in an ensemble (17). This mimics the process of developing swarming behavior as described, for example, by the Vicsek model (40). Our control mechanism is capable of addressing multiple swimmers separately such that they may also cooperatively explore the environment. Instead of a true collective strategy, we are considering a low density of swimmers (number of swimmers ≪ number of states), which share the information gathered during the learning process by drawing their actions from and updating the same Q-matrix. The swimmers are exploring the same gridworld in different spatial regions, and thus, a speedup of the learning is expected. Figure 3C displays the trajectories of two particles sharing the same Q-matrix, which is updated in each learning step. As a result, the learning speed is enhanced (Fig. 3D). The proposed particle control therefore provides the possibility to explore a collective learning or the optimization of collective behavior and thus delivers an ideal model system with real physical interactions.

(A) Example trajectories for a learning process with a virtual obstacle (red square, R = −100) next to the goal state (R = 5) in the center of the gridworld. (B) Example trajectory for an active particle that has learned to reach a goal state (R = 5) behind a large virtual obstacle (red rectangle, R = −100). (C) Example trajectories for two particles sharing information during the learning process. The same rewards as in Fig. 2 have been used. (D) Sum of all Q-matrix elements at each transition comparing the learning speed with two particles sharing the information. In all the panels, the active particle speed during the learning process has been v = 3.0 μm s⁻¹.

A notable difference between macroscopic agents, like robots, and microscopic active particles is the Brownian motion of microswimmers. There is an intrinsic positional noise present in the case of active particles, which is also of relevance for small living organisms like bacteria, cells, and all active processes on microscopic length scales. The advantage of the presented model system, however, is that the influence of the strength of the noise can be explored for the adaptation process and the final behavior, whereas this is difficult to achieve in biological systems.

The importance of the noise in Brownian systems is commonly measured by the Péclet number, Pe = rv/2D, comparing the product of particle radius r and the deterministic particle displacement vt to the corresponding square displacements by Brownian motion 2Dt. To explore the influence of the noise strength, we change the speed of the active particle v, whereas the strength of the noise is given by the constant diffusion coefficient D. We further introduce a penalty in the boundary states R = −100 to modify the environment in a way that the influence of noise can introduce quantitative consequences for the transitions.
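Plugging the numbers quoted in this article (r ≈ 1.09 μm, D = 0.1 μm² s⁻¹) into Pe = rv/2D gives a feel for how strong the noise still is at these speeds; this is plain arithmetic on the stated values:

```python
# Peclet number Pe = r*v / (2*D) for the values quoted in the article.
r, D = 1.09, 0.1                 # radius [um], diffusion coefficient [um^2/s]
for v in (2.0, 3.0, 5.0):        # particle speeds [um/s]
    print(f"v = {v} um/s -> Pe = {r * v / (2 * D):.1f}")
# Pe ranges from about 11 to 27: propulsion dominates, but Brownian
# motion is far from negligible on the 6 um scale of a grid cell.
```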

When varying the speed v between 2 and 5 μm s⁻¹, we make four general observations. (i) Due to time delay in the feedback loop controlling the particles, the noise influence depends on the particle speed nonmonotonically (Fig. 4E and the Supplementary Materials). As a result, we find an optimal particle speed for which the noise is least important, as discussed in more detail in the following section. For the parameters used in the experiment, the optimal velocity is close to the maximum speed available. When increasing the speed in the limited interval of the experiment, the importance of the noise thus decreases. (ii) The Q-matrix converges considerably faster for higher particle speeds corresponding to a lower relative strength of the noise. This effect is intuitive because the stronger the noise, the lower the correlation between action and desired outcome. Figure 4A shows the convergence of the sum of the Q-matrix elements (summed over all entries for a given transition) for different microswimmer speeds (v = 2.8 μm s⁻¹, v = 4.0 μm s⁻¹, and v = 5.1 μm s⁻¹). Although the sum reaches 50% after 250 transitions for the highest velocity, this requires almost 10 times more transitions at about half the speed. (iii) The resulting optimal policy depends on the noise strength. In Fig. 4B, we show the policies obtained for two different velocities (v = 1.6 μm s⁻¹ and v = 4.6 μm s⁻¹). Differences in the two policies are, in particular, visible in the states close to the boundary. Most of the actions at the top and right edge of the low-velocity policy point inward, whereas actions parallel to the edge are preferred at the higher velocity (see highlighted regions in Fig. 4, B and C). (iv) The contrast between the best action and the average of the other actions, which we take as a measure of the decision strength, is enhanced upon increasing importance of the noise. This contrast for a given state s_k is measured by

$$G(s_k) = \frac{1}{\mathcal{N}}\left\{Q(s_k, a_b) - \langle Q(s_k, a_i)\rangle_i\right\} \quad (2)$$

where a_b denotes the best action for the state and ⟨Q(s_k, a_i)⟩_i = Σ_{i=1}^{8} Q(s_k, a_i)/8. The result is normalized by the factor $\mathcal{N}$ to make the largest contrast encoded in the color of the states in Fig. 4B equal to one.
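For a single state, the contrast of Eq. 2 is straightforward to evaluate once that state's row of the Q-matrix is known; the Q-values below are invented for illustration, and the overall normalization (largest contrast in the grid scaled to one) is omitted:

```python
import numpy as np

# Unnormalized decision-strength contrast of Eq. 2 for one state,
# using an invented row of Q-values (one entry per action).
q_row = np.array([0.2, 0.5, 0.1, 0.9, 0.3, 0.2, 0.4, 0.1])
contrast = q_row.max() - q_row.mean()     # Q(s_k, a_b) - <Q(s_k, a_i)>_i
print(contrast)                           # ~0.5625
```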

(A) Sum of the Q-matrix elements as a function of the total number of transitions during the learning process. The different curves were obtained for learning with three different microswimmer speeds. (B) Policy obtained from learning processes at high noise (low velocity) (π1: v = 1.6 μm s⁻¹) and low noise (high velocity) (π2: v = 4.6 μm s⁻¹). The coloring of the states corresponds to the contrast between the value of the best action and the average of all other actions (Eq. 2). (C) Transition probabilities used in Bellman's Eq. 3 for diagonal and nondiagonal actions as determined from experiments with 500 trajectories for a velocity of 1.6 and 4.6 μm s⁻¹. The blue lines indicate example experimental trajectories, which yield equivalent results for actions a2, a4, a5, a7 (top) and a1, a3, a6, a8 (bottom). The blue dots mark the first point outside the grid cell. The histograms to the right show the percentage arriving in the corresponding neighboring states. The numbers below denote the percentages for the two velocities (value in parentheses for higher velocity). (D) Origin of directional uncertainty. The green dots indicate the possible laser position due to the Brownian motion of the particle within the delay time δt. The two graphs to the right display the experimental particle displacements of a single microswimmer within the delay time δt = t_exp = 180 ms, when starting at the origin for two different particle velocities. (E) Variances of the point clouds in (D) parallel and perpendicular to the intended direction of motion. The dashed lines correspond to the theoretical prediction according to Eq. 4 for the perpendicular motion (σ⊥²) and σ∥² = 2Dt + (cosh(σ_θ²) − 1)v²t² for the tangential motion with σ_θ² ≈ 0.23 rad², D = 0.1 μm² s⁻¹, and t = δt = 180 ms. (F) Survival fraction of particles moving in the upper states at the boundary toward the goal state in policy π2 indicated in the inset. The survival has been determined from simulations for the same parameters as in (E).

Because the environment (gridworld with its rewards) stays constant for all learning processes at different velocities, all our above observations for varying particle speed are related to the importance of the noise strength. According to Bellman's equation (10)

$$Q^*(s,a) = \sum_{s'} P(s' \mid s, a)\left[R(s') + \gamma \max_{a'} Q^*(s', a')\right] \quad (3)$$

the influence of the noise on the learning process is encoded in the transition probabilities P(s′|s, a), i.e., the probabilities that an action a in the state s leads to a transition to the state s′. This equation couples the element Q*(s, a) of the optimized Q-matrix, corresponding to a state s and action a, with the discounted values γ max_{a′} Q*(s′, a′) of the optimal policy in the future states s′ and the corresponding future rewards R(s′), weighted by the transition probabilities P(s′|s, a). Using this equation, one can obtain the Q-matrix and the optimal policy by a Q-matrix value iteration procedure if the transition probabilities are known. The transition probabilities thus contain the physics of the motion of the active particle, including the noise, and decide how different penalties or rewards of the neighboring states influence the value of Q.
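The Q-matrix value iteration mentioned here can be written compactly once P(s′|s, a) and R(s) are stored as arrays. The sketch below is generic, not the authors' code, and the three-state example uses made-up numbers purely to show the mechanics:

```python
import numpy as np

# Q-matrix value iteration based on Bellman's equation (Eq. 3), assuming the
# transition probabilities P[s, a, s'] have been measured beforehand
# (for example from repeated single-cell trajectories as in Fig. 4C).
def q_value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P: (S, A, S) transition probabilities, R: (S,) state rewards."""
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    while True:
        V = Q.max(axis=1)                 # max_a' Q(s', a')
        Q_new = P @ (R + gamma * V)       # sum_s' P(s'|s,a) [R(s') + gamma V(s')]
        if np.max(np.abs(Q_new - Q)) < tol:
            return Q_new
        Q = Q_new

# Toy 3-state, 2-action model with placeholder numbers.
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.9, 0.0]],
              [[0.0, 0.5, 0.5], [0.0, 0.1, 0.9]],
              [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])
R = np.array([-1.0, -1.0, 10.0])
Q_star = q_value_iteration(P, R)
print(Q_star.argmax(axis=1))              # optimal action per state
```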

We have measured the transition function for the two types of transitions (diagonal and nondiagonal) using 500 trajectories in a single grid cell. To obtain the transition function, we set the starting position of all the trajectories to the center of the grid cell, carried out the specific action, and determined the state in which the particle trajectory ended up. The results are shown in Fig. 4C with exemplary trajectories and a histogram to the right. The numbers below the histograms show the corresponding transition probabilities to the neighboring state in percent for a velocity of v = 1.6 μm s⁻¹ (v = 4.6 μm s⁻¹ for the values in parentheses). The two velocities show only weak changes in the transition probabilities for the nondiagonal actions, which appear to be responsible for the changes in the policies in Fig. 4B. Carrying out a Q-matrix value iteration confirms the changes in the policy in the marked regions for the measured transition probability range (see the Supplementary Materials).

The advantage of our experimental system is that we can explore the detailed physical behavior of each microswimmer in dedicated experiments. To this end, we find two distinct influences of the Brownian motion as the only noise source on the microswimmer's motion. Figure 4D shows the distribution of microswimmer displacement vectors within a time t_exp = 180 ms for two different velocities. Each displacement starts at the origin, and the point cloud reflects the corresponding end points of the displacement vectors. With increasing velocity, the particles increase their step length in the desired horizontal direction. The mean distance corresponds to the speed of the particle, and the end points are located close to a circle. At the same time, a directional uncertainty is observed where the angular variance σ_θ² is nearly constant for all speeds (see the Supplementary Materials for details). This directional noise is the result of a delayed action in the experiments (30, 41), i.e., a time separation between sensing (imaging the position of the particle) and action on the particle position (placing the laser for propulsion). Both are separated by a delay time δt, which is the intrinsic delay of the feedback loop (δt = t_exp = 180 ms in our experiments). A delayed response is a very generic feature of all active responsive systems, including biological species. In the present case of a constant propulsion speed, it leads to an anisotropic noise. In the direction perpendicular to the intended action, the Brownian noise gets an additional component that is increasing nonlinearly with the particle speed, whereas the noise along the intended direction of motion is almost constant (Fig. 4E).

The increase in the variance perpendicular to the direction of motion can be analyzed with a simple model (see the Supplementary Materials for details), which yields

$$\sigma_\perp^2 = v^2\, t\, \sinh(\sigma_\theta^2)\, \delta t + 2Dt \quad (4)$$

and corresponds well with the experimental data (Fig. 4E) for σ_θ² ≈ 0.23 rad² and fixed time t = δt. In particular, it captures the nonlinear increase of σ⊥² with the particle speed v.

The increase has important consequences. When considering the motion in the top four states of policy π2 (Fig. 4B), the particle would move horizontally toward the goal starting at an arbitrary position in the leftmost state. From all trajectories that started, only a fraction will arrive at the goal state before leaving these states through the upper, lower, or left boundaries of those four states. This survival fraction has been determined from simulations (also see the Supplementary Materials for an approximate theoretical description). Overall, a change between the two policies π1 and π2 is induced by an increase of the survival by less than 10% when going from v = 1.6 μm s⁻¹ to v = 4.6 μm s⁻¹. When further increasing the velocity, we find in simulations that an optimal velocity for maximum survival exists. This maximum corresponds to the minimum

$$v_{\mathrm{opt}} = \sqrt{\frac{2D}{\sinh(\sigma_\theta^2)\, \delta t}} \quad (5)$$

of the variance (Eq. 4) for a fixed traveled distance a = vt, which only depends on the diffusion coefficient D, the angular variance σ_θ², and the sensorial delay δt (see the Supplementary Materials for details). In the limit of instantaneous actions (δt = 0), an infinitely fast motion would yield the best results. Any nonzero delay will introduce a speed limit at which a maximum survival is ensured. We expect that the optimal policy for very high velocities should yield a similar policy as for low velocities. An experimental verification of this conjecture is currently out of reach, so Fig. 4F shows the results of the simulations.
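Eq. 5 follows from Eq. 4 by a short minimization at fixed traveled distance a = vt (so t = a/v); the steps below assume the reconstructed form of Eq. 4 given above:

```latex
% Minimizing the transverse variance at fixed traveled distance a = v t:
\sigma_\perp^2(v) = v^2 \frac{a}{v}\sinh(\sigma_\theta^2)\,\delta t + 2D\frac{a}{v}
                  = a\left[v\,\sinh(\sigma_\theta^2)\,\delta t + \frac{2D}{v}\right],
\qquad
\frac{\mathrm{d}\sigma_\perp^2}{\mathrm{d}v}
  = a\left[\sinh(\sigma_\theta^2)\,\delta t - \frac{2D}{v^2}\right] = 0
\;\Longrightarrow\;
v_{\mathrm{opt}} = \sqrt{\frac{2D}{\sinh(\sigma_\theta^2)\,\delta t}}.
```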

The observed behavior of the survival probability, which exhibits a maximum for a certain particle velocity, implies that the probability to reach the target is maximal for the same optimal velocity. Moreover, because the underlying analysis is solely based on the competition of two noises omnipresent in (Brownian) active matter, namely the diffusion and the uncertainty in choosing the right direction, we conjecture that the observed type of behavior is universal. The precision of reaching the target (long-time variance of the distance from the target) by the run-and-tumble motion of bacteria exhibits a minimum as a function of the run-and-tumble times (42, 43), reminiscent of our results. These results also demonstrate that the combination of machine learning algorithms with real-world microscopic agents can help to uncover physical phenomena (such as time delay in the present work), which play important roles in the microscopic motion of biological species.

Concluding, we have demonstrated RL with a self-thermophoretic microswimmer carrying out actions in a real-world environment with its information processing and sensing capabilities externalized to a computer and a microscopy setup. Already with this hybrid solution, one obtains a model system, where strategies in a noisy environment with virtual obstacles or collective learning can be explored. Although our simple realization of a gridworld is based on a global position detection defining the state of the swimmer, future applications will consider local information, e.g., the response to a temporal sequence of local physical or chemical signals, to allow for navigation in unknown environments. As compared with a computer simulation, our system contains a nonideal control limited by the finite reaction time of the feedback loop, presence of liquid flows, imperfections of the swimmers or sample container, hydrodynamic interactions, or other uncontrolled parameters that naturally influence the learning process. In this way, it resembles a new form of computer simulation using real-world agents. An important advantage is that the physics of the agent can be explored experimentally in detail to understand the learned strategies, and the real-world interactions in more complex environments can be used to adapt the microswimmers' behavior. In that sense, even the inverse problem of using the learned strategy to reveal the details of these uncontrolled influences may be addressed as a new form of environmental sensing. Similarly, the control of active particles by machine learning algorithms may be used in evolutionary robotics (8, 44), where the interaction of multiple particles may be optimized to yield higher-order functional structures based on environmental interactions. Although the implementation of signaling and feedback by physical or chemical processes into a single artificial microswimmer is still a distant goal, the current hybrid solution opens a whole branch of new possibilities for understanding adaptive behavior of single microswimmers in noisy environments and the emergence of collective behavior of large ensembles of active systems.

Samples consisted of commercially available gold nanoparticle-coated melamine resin particles of a diameter of 2.19 μm (microParticles GmbH, Berlin, Germany). The gold nanoparticles were covering about 30% of the surface area and were between 8 and 30 nm in diameter (see the Supplementary Materials for details.) Microscopy glass cover slides were dipped into a 5% Pluronic F127 solution, rinsed with deionized water, and dried with nitrogen. The Pluronic F127 coating prevented sticking of the particles to the glass cover slides. Two microliters of particle suspension was placed on the cover slides to spread about an area of 1 cm by 1 cm, forming a 3-μm-thin water film. The edges of the sample were sealed with silicone oil (polydimethylsiloxane) to prevent water evaporation.

Samples were investigated in a custom-built inverted dark-field microscopy setup based on an Olympus IX-71 microscopy stand. The sample was held by a Piezo stage (Physik Instrumente) that was mounted on a custom-built stepper stage for coarse control. The sample was illuminated by a halogen lamp (Olympus) using a dark-field oil-immersion condenser [Olympus, numerical aperture (NA), 1.2]. The scattered light was collected by an oil-immersion objective lens (Olympus, 100×, NA 1.35 to 0.6) with the NA set to 0.6 and captured with an Andor iXon emCCD camera. A λ = 532 nm laser was focused by the imaging objective into the sample plane to serve as a heating laser for the swimmers. Its position in the sample plane was steered by an acousto-optic deflector (AOD; AA Opto-Electronic) together with a 4-f system (two f = 20 cm lenses). The AOD was controlled by an ADwin realtime board (ADwin-Gold, Jäger Messtechnik) exchanging data with a custom LabVIEW program. A region of interest of 512 pixels by 512 pixels (30 μm by 30 μm) was used for the real-time imaging, analysis, and recording of the particles, with an exposure time of t_exp = 180 ms. The details of integrating the RL procedure are contained in the Supplementary Materials.

robotics.sciencemag.org/cgi/content/full/6/52/eabd9285/DC1

Fig. S1. Symmetric swimmer structure.

Fig. S2. Swimmer speed as a function of laser power.

Fig. S3. Directional noise as function of the swimming velocity measured in the experiment.

Fig. S4. Directional noise model.

Fig. S5. Results of the analytical model of the influence of the noise.

Fig. S6. Q-matrix value iteration result.

Movie S1. Single-swimmer free navigation toward a target during learning.

Movie S2. Single-swimmer free navigation toward a target after learning.

Movie S3. Navigation toward a target with virtual obstacles.

Movie S4. Multiple-swimmer free navigation toward a target.

J. K. Parrish, W. M. Hamner, Animal Groups in Three Dimensions (Cambridge Univ. Press, 1997).

J. Kober, J. Peters, Reinforcement learning in robotics: A survey, in Learning Motor Skills (Springer Tracts in Advanced Robotics, 2014), vol. 97, pp. 9–67.

M. Wiering, M. v. Otterlo, Reinforcement Learning, in Adaptation, Learning, and Optimization (Springer Berlin Heidelberg, 2012), vol. 12.

R. S. Sutton, A. G. Barto, Reinforcement Learning: An Introduction (MIT Press, 1998).

C. J. C. H. Watkins, thesis, King's College, Cambridge (1989).

L. Busoniu, R. Babuška, B. De Schutter, Multi-agent reinforcement learning: A survey, in Proceedings of the 9th International Conference on Control, Automation, Robotics and Vision (ICARCV 2006) (Singapore, 2006), pp. 527–532.

Acknowledgments: Helpful discussion with P. Romanczuk is acknowledged in pointing out observations of directional noise for biological systems. Fruitful discussion and help with extrapolating the theory to the experiments by K. Ghazi-Zahedi are acknowledged. We thank A. Kramer for helping to revise the manuscript. Funding: The authors acknowledge financial support by the DFG Priority Program 1726 Microswimmers through project 237143019. F.C. is supported by the DFG grant 432421051. V.H. is supported by a Humboldt grant of the Alexander von Humboldt Foundation and by the Czech Science Foundation (project no. 20-02955J). Author contributions: F.C. conceived the research. S.M.-L. and F.C. designed the experiments. S.M.-L. implemented the system, and S.M.-L. and A.F. performed the experiments. S.M.-L., V.H., and F.C. analyzed and discussed the data. F.C., V.H., and S.M.-L. wrote the manuscript. Competing interests: The authors declare that they have no competing financial interests. Data and materials availability: All data needed to evaluate the conclusions are available in the paper or in the Supplementary Materials. Additional data and materials are available upon request.

Read more here:

Reinforcement learning with artificial microswimmers - Science

Posted in Robotics