
Category Archives: Ai

AI And Optimism: Jim Mellon Wants Us All To Live Longer – Forbes

Posted: January 27, 2022 at 11:53 pm

Jim Mellon, highly successful investor, and longevity pioneer

Jim Mellon is an optimist. Which is just as well, since he is one of the people trying to engineer a complete transformation in attitudes towards aging: attitudes within the medical profession, among the public at large, and, crucially, in the investment community.

As an example of his optimism, Mellon says that, thanks largely to the vertiginous advances in artificial intelligence, if you can stay alive for another ten to twenty years, and if you aren't yet over 75, and if you remain in reasonable health for your age, you have an excellent chance of living to over 110 years old. And he means living to 110 in very good health. Not dribbling or drooling. For the investment community, he argues that the incremental addition of 30 years or so to average lifespans over the next two or three decades will represent the single greatest investment opportunity in recorded history.

It's a big claim, but Mellon already has some remarkable achievements to his name. His optimism served him well in his career as an investor before he became interested in longevity. He prospered in financial services in the 1980s, but his first really big break came during a business trip to Russia in 1994. Realising that shares in state companies were being sold by Russian citizens at a massive discount, he arranged for a suitcase containing $2 million in cash to be flown in from London, and in the course of a couple of days he spent the lot. A few weeks later the shares were worth $17 million. Today, Mellon is based in the Isle of Man, where he is the biggest landowner, and he shuttles between homes there and in Ibiza, Hong Kong, and elsewhere.

He has also dabbled in politics, where he has the dubious distinction of being the person who introduced Nigel Farage to Arron Banks. Mellon's optimism is even strong enough to allow him to think that Brexit might still turn out well.

But he abandoned politics some years ago, and now concentrates on investing, and in particular, investing in anti-aging technology. He has been a leading biotech investor for two decades, and since the Big Bang in AI in 2012, when Geoff Hinton finally got the backpropagation algorithm to work properly, and thus gave birth to deep learning, Mellon has seen the enormous potential of applying modern AI to longevity.

In 2016 he co-founded Juvenescence, a venture capital and development company focused on modifying aging and increasing human health span and longevity. Juvenescence raised $50 million in 2018 and another $100 million in 2019.

In April this year it launched its first consumer product, Metabolic Switch - pills which raise the level of ketones in the blood. Ketones are molecules produced by the body when it uses fat for fuel instead of sugar, which is a good thing. The usual way to achieve this is with a calorie-restricted diet, which most of us struggle to maintain.

Metabolic Switch is only available in the US at the moment, but no doubt there will be a few bottles on Mellon's bathroom shelves. Understandably, Mellon is coy about what other anti-aging therapies he uses: we humans are all different, and what works for one can be harmful to another, so recommendations are perilous. But he does comment that just about every person he has met in the longevity space takes metformin, a treatment for diabetes which is currently in clinical trials for use as a general anti-aging drug.

Juvenescence was also an early investor in a company called Insilico, founded by Alex Zhavoronkov, which is one of the foremost exponents of the use of cutting-edge AI in longevity research, namely deep learning and Generative Adversarial Networks (GANs), which pit two AI systems against each other in a competition to produce the best solution to a problem. Insilico works with mainstream pharmaceutical firms, and when Insilico spun out a subsidiary, called Deep Longevity, which focuses specifically on longevity research, it was acquired by another of Mellon's investment vehicles, Hong Kong-based Regent Pacific.
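
The article only gestures at how a GAN works, so, purely as a generic illustration (not Insilico's code or anything Juvenescence uses), here is a minimal sketch of the adversarial setup in PyTorch; the network sizes and the random stand-in "real" data are assumptions made for the example.

```python
# Minimal GAN sketch: a generator and a discriminator trained against each other.
# Toy sizes and random stand-in "real" data; for illustration only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 32
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, data_dim)  # stand-in for real samples (e.g. molecular descriptors)

for step in range(200):
    real = real_data[torch.randint(0, 512, (64,))]
    fake = G(torch.randn(64, latent_dim))

    # Discriminator: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label its output as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two losses pull in opposite directions, which is the "competition" the article describes.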

Mellon's 2017 book on longevity (also called Juvenescence) explains his ambition for the space. Traditionally, when people get old they become "illderly", because our role in life is to learn, then earn, then retire and expire. Now, to quote the anthropologist Ashley Montagu, the idea is to die young - as late as possible.

There is nothing inevitable about aging, or about its rate. Californian bristlecone pines are believed to live for 5,000 years, and there are long-lived mammalian creatures as well. Some marine creatures do not display any signs of aging at all, including hydra, jellyfish, planarian worms, and coral. Certain human cells have immortal characteristics too. When a woman gives birth, she produces a baby which is new. Her germline (reproduction-related) cells produce a child with no signs of age.

These and many other considerations combine with the unreasonable effectiveness of modern AI to lead some people to believe that significant advances in longevity are imminent. These advances probably cannot happen without the active participation of the wider pharmaceutical industry, and the acceptance by policy makers and regulators that aging is a disease, not just an unfortunate and inevitable component of the human condition. There is still considerable reluctance among major pharmaceutical companies to contemplate specific anti-aging therapeutic developments. But there are encouraging signs of this reluctance being challenged, especially at Novartis and AstraZeneca.

Beyond the pharma giants, Mellon reckons there are 255 companies which claim to be specifically targeting aging, of which 35 are listed on stock markets. But he thinks that only a minority of them are genuinely working to tackle aging, as opposed to one of the diseases it causes, like cancer, dementia, or heart disease. He likens the state of the longevity industry today to the internet industry of 20 years ago, when it was still in its dial-up phase, and downloading information (or, heaven forbid, images) was like sucking jelly through a straw. And although longevity will have such a massive impact on all of us that you might expect progress to be expedited, Mellon points out that the internet did not have to go through lengthy and expensive FDA trials at every step.

To help speed things along, he is involved with the launch of an industry association towards the end of this year. Members of the longevity industry are unusually collaborative and open with each other. The opportunity is vast, and it is far more important to increase the size of the pie than to squabble over shares in it.

Mellon is also funding longevity research at his alma mater, Oriel College at Oxford University. (Disclaimer: it's also mine.) The university is a powerhouse for many kinds of science, but longevity is not prominent among them. The Mellon Longevity Science Programme helps fund the research of Professor Lynne Cox into the senescence of the human immune system.

What does the optimistic Mr Mellon think is possible within the next few decades? If the green shoots visible in the pharmaceutical and financial communities continue to grow, he thinks that by 2035 we could have some drugs in wide circulation with significant, proven anti-aging impacts. In addition, the first gene therapies could be starting to prove their value, although this may be a harder sell to the wider public. By 2050 it may be evident that some people are going to live to 150 years.

We live in interesting times.


Toward Ethical and Equitable AI in Higher Education | Beyond Transfer – Inside Higher Ed

Posted: at 11:53 pm

As the higher education sector grapples with the new normal of the post-pandemic era, the structural issues of the recent past not only remain problematic but have been exacerbated by COVID-related disruptions throughout the education pipeline. Navigating the complexity of higher education has always been challenging for students, particularly at underresourced institutions that lack the advising capacity to provide guidance and support. Areas such as transfer and financial aid are notorious black boxes of complexity, where students lacking financial resources and college knowledge are too often left on their own to make decisions that may prove costly and damaging down the line.

The educational disruptions that many students have faced during the pandemic will likely deepen this complexity by producing greater variations in individual students' levels of preparation and academic histories, even as stressed institutions have fewer resources to provide advising and other critical student services. Taken together, these challenges will make it all the more difficult to address the equity gaps that the sector must collectively solve.

While not a panacea, recent advances in artificial intelligence methodologies such as machine learning can help to alleviate some of the complexity that students and higher education institutions face. However, researchers and policy makers should proceed with caution and healthy skepticism to ensure that these technologies are designed and implemented ethically and equitably. This is no easy task and will require sustained, rigorous research to complement the rapid technology advances in the field. While AI-assisted education technologies offer great promise, they also pose a significant risk of simply replicating the biases of the past. In the summary below, we offer a brief example drawn from recent research findings that illustrate the challenges and opportunities of equitable and ethical AI research.

Machine learning-based grade prediction has been among the first applications of AI to be adopted in higher education. It has most often been used in early-warning detection systems to flag students for intervention if they are predicted to be in danger of failing a course, and it is starting to see use as part of degree pathway advising efforts. But how fair are these models with respect to the underserved students these interventions are primarily designed to support? A quickly emerging research field within AI is endeavoring to address these types of questions, with education posing particularly nuanced challenges and trade-offs with respect to fairness and equity.

Generally, machine learning algorithms are most accurate in predicting that which they have seen the most of in the past. Consequently, with grade prediction, they will be more accurate at predicting the groups of students who produce the most common grade. When the most common grade is high, this will lead to perpetuating inequity, where the students scoring lower will be worst served by the algorithms intended to help them. This was observed in a recently published study out of the University of California, Berkeley, evaluating predictions of millions of course grades at the large public university. Having the model give equal attention to all grades led to better results among underserved groups and more equal performance across groups, though at the expense of overall accuracy.
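
As a rough sketch of the "equal attention to all grades" idea in this paragraph, the snippet below uses class weighting in a standard classifier so that rare grades count as much in the loss as common ones; the features, labels, and proportions are invented placeholders, not the Berkeley study's data.

```python
# Sketch: weight each grade class inversely to its frequency so the model does not
# simply favour the most common grade. All data here are synthetic placeholders.
import numpy as np
from collections import Counter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                  # hypothetical course-history features
y = rng.choice(["A", "B", "C", "DFW"], size=1000, p=[0.45, 0.30, 0.15, 0.10])
print(Counter(y))                                # an imbalanced grade distribution

# class_weight="balanced" trades a little overall accuracy for more even
# performance across grade classes.
model = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)
```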

While addressing race and bias in a predictive model is important, doing so without care can exacerbate inequity. In the same study, adding race as a variable to the model without any other modification led to the most unequal, and thus least fair, performance across groups. Researchers found that the fairest result was achieved through a technique called adversarial learning, an approach that teaches the model not to recognize race and adds a machine learning penalty when the model successfully predicts race based on a student's input data (e.g., course history). Researchers also attempted to train separate models for each group to improve accuracy; however, information from all students always benefited prediction of every group compared to only using that group's data.
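
A minimal sketch of the adversarial idea reported in the study follows: a grade predictor is trained while an adversary tries to recover the protected attribute from the predictor's internal representation, and the predictor is penalized whenever the adversary succeeds. The architecture, penalty weight, and data are assumptions for illustration, not the authors' code.

```python
# Sketch of adversarial learning for fairness: penalize the grade predictor when an
# adversary can infer a protected attribute from its hidden representation.
# Shapes, data, and the penalty weight are illustrative placeholders.
import torch
import torch.nn as nn

n, d = 2000, 12
X = torch.randn(n, d)                        # hypothetical course-history features
grade = torch.randint(0, 4, (n,))            # grade classes
group = torch.randint(0, 2, (n,)).float()    # protected attribute

encoder = nn.Sequential(nn.Linear(d, 32), nn.ReLU())
grade_head = nn.Linear(32, 4)
adversary = nn.Linear(32, 1)                 # tries to predict the protected attribute

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(grade_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()
lam = 0.5                                    # strength of the fairness penalty

for step in range(300):
    h = encoder(X)

    # 1) Train the adversary to recover the protected attribute from the representation.
    adv_loss = bce(adversary(h.detach()).squeeze(1), group)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Train encoder + grade head: predict grades well while making the adversary fail.
    main_loss = ce(grade_head(h), grade) - lam * bce(adversary(h).squeeze(1), group)
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()
```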

These findings underscore the challenges in designing AI-infused technologies that promote rather than undermine the student success objectives of an institution. Further work is needed to develop additional best practices to address bias effectively and to promote fairness in the myriad of educational scenarios in which machine learning could otherwise contribute to the widening of equity gaps.

The State University of New York and UC Berkeley have launched a partnership to take on these challenges and advance ethical and equitable AI research broadly in higher education. The first project of the partnership will be applied to the transfer space, where we will be quantifying disparities in educational pathways between institutions based on data infrastructure gaps, testing a novel algorithmic approach to filling these gaps and developing policy recommendations based on the results. While this project represents an incremental step, we look forward to advancing this work and welcome partnerships with individuals and organizations with similar interests and values.

To connect with us, reach out to Dan Knox and Zach Pardos.


SparkCognition, which develops AI solutions for a range of industries, nabs $123M – VentureBeat

Posted: at 11:53 pm


As a result of pandemic headwinds and the general trend toward automation, the industrial sector is increasingly piloting AI technologies across different lines of business. According to a Deloitte survey on AI adoption in manufacturing, 93% of companies believe that AI will be a pivotal technology to drive growth and innovation in the future. Illustrating the transformation, a McKinsey report found that 15% of manufacturing companies now use AI to optimize key areas of production such as yield, energy, or throughput optimization, up from 9% in 2018.

But stumbling blocks stand in the way of successful AI deployment in industrial applications. For example, Chinese companies responding to the above-mentioned Deloitte poll said that 91% of their AI projects failed to meet expectations, either in terms of their benefits or time invested. Among the biggest obstacles cited were infrastructure limitations, poor data collection practices and quality, a lack of engineering experience, and excessively large project scale and complexity.

AI-focused consultancies have emerged recently to assist industrial as well as oil and gas, renewables, financial services, transportation, and government organizations in implementing AI technologies. Fractal Analytics, Tata Consultancy Services, Wipro, Tredence, LatentView, and Mu Sigma occupy a growing category of AI-as-a-service companies that work with enterprises to develop AI solutions customized for their organizations. So does SparkCognition, an Austin, Texas-based firm that uses AI to analyze, optimize, and learn from customers' data to predict future outcomes, optimize processes, and ostensibly prevent cyberattacks.

In a show of the market's strength, SparkCognition today announced that it raised $123 million in series D funding at a $1.4 billion valuation led by March Capital, Doha Venture Capital, B. Riley Venture Capital, AEI Horizon X, Temasek, Alan Howard, and Peter Löscher, bringing the company's total raised to $300 million. The financing follows a record year of growth for SparkCognition, with revenue increasing 90% year over year and bookings climbing fivefold.

SparkCognition was founded in 2013 by Amir Husain. Husain previously launched Kurion, which created branded web portals for companies including Barnes and Noble, Dun & Bradstreet, and financial services institutions. He then led development of desktop computing products at ClearCube before joining virtualization services company VDIworks as CTO.

"[T]he pandemic has heightened our customers' understanding of the value AI can deliver. In the face of supply chain uncertainties, fluctuating demand for resources like oil and gas, and increased remote work, it is never more critical to have advanced analytics predicting failures and cyberattacks before they occur, flagging operational inefficiencies, and identifying opportunities for increased production. This ultimately impacts our customers' bottom line, helping them see business growth even in uncertain times," Husain told VentureBeat via email.

SparkCognition provides a range of services developed to overcome particular data science hurdles in organizations. For instance, the company's Darwin tool abstracts away many of the steps in developing and maintaining AI models, including data preparation and cleansing. Husain claims that Darwin can uncover problems like missing data while suggesting solutions to problems in an AI training dataset, such as malformed or missing data. Darwin can also ostensibly deliver explainable model results that spotlight important aspects of a dataset, he says.
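
Darwin itself is proprietary, so the snippet below is only a generic illustration of the kind of dataset audit described here, flagging missing and malformed values before model training; the file name and column names are hypothetical.

```python
# Generic data-quality audit of the kind an automated ML tool might run first.
# The CSV path and column names are hypothetical.
import pandas as pd

df = pd.read_csv("sensor_readings.csv")   # hypothetical training dataset

# Share of missing values per column.
missing = df.isna().mean().sort_values(ascending=False)
print(missing[missing > 0])

# Values that fail to parse as numbers in columns expected to be numeric.
for col in ["temperature", "pressure"]:   # assumed numeric columns
    parsed = pd.to_numeric(df[col], errors="coerce")
    malformed = df[parsed.isna() & df[col].notna()]
    print(col, "malformed rows:", len(malformed))
```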

On the cybersecurity side, SparkCognition offers DeepArmor, which leverages AI to attempt to mitigate executable-based cyberattacks. Meanwhile, the company's DeepNLP service automates workflows of unstructured data to simplify tasks like information retrieval, document classification, and analytics. SparkCognition's SparkPredict and Ensemble are AI-powered asset management and predictive maintenance platforms, built to detect suboptimal production yields and equipment failures proactively. Rounding out the product portfolio is Maana, which aims to encode organizations' institutional knowledge, and an AI-powered market trading platform called Orca.

"Enterprises are faced with data overload, and 90% of data available is unstructured, which traditionally requires an extraordinary amount of manual effort to sort through and extract insights. [B]ut technologies like the solutions we offer use machine learning and natural language processing to expedite that process significantly," Husain said. "We take an end-to-end approach, leveraging technology like artificial intelligence, machine learning, deep learning, natural language processing, and knowledge representation. We deliver these solutions in a user-friendly interface that quickly and clearly provides insights and alerts when a process or asset needs attention."

Three-hundred-employee SparkCognition positioned itself for growth last year, acquiring three companies and expanding into the financial services, maritime, and renewable energy markets. According to Husain, SparkCognition, which has 65 customers, helped a major power generation company spot an anomaly that enabled critical event detection a month in advance, helping that company to avoid costly repairs. It also worked with a beverage manufacturer to address water waste and leaks, Husain said, reducing consumption of the resources throughout the manufacturing plant.

This additional capital will enable us to deepen our subject matter expertise, enhance our patented portfolio, and accelerate the diversity of problems we solve for customers, maximizing their return on investment, Husain continued.

But surveys show that organizations struggle to derive value from their AI deployments, representing an existential threat to SparkCognition's business. For example, a 2018 report from 451 Research found that the majority of AI early adopters have failed to define key performance indicators around their AI and machine learning initiatives and encountered technical limitations while operationalizing data.

In more sobering metrics, 76% of organizations in a 2020 PricewaterhouseCoopers-sponsored survey reported barely breaking even with their investments in AI capabilities. Despite the fact that 80% of executives said they believed AI will fundamentally change their business, only 6% had AI initiatives scaled across the enterprise.

Organizations are relying on existing talent and processes more oriented to software development than to the dynamic nature of AI. Many may underestimate the effort and investment they need in order to see returns, a piece in the Harvard Business Review reads. And many organizations may lack the governance structures to monitor AI effectively.

Still, Husain believes that SparkCognition is primed to make a change once the proceeds from the series D are put toward planned marketing, sales, and R&D efforts. To his point, the broader AI market shows signs of accelerating, not slowing, with 39% of large companies planning to invest in AI services as of 2020.

Verified Market Research predicts that the global market for AI will reach $641.30 billion by 2028.

"We're seeing quite a few challenges facing our customers across industry. These include aging and failing assets, climate change and net-zero initiatives, emerging cyberthreats to IT infrastructure, an aging workforce and consequential skill gaps, and data overload," Husain added. "To address these challenges, we encourage businesses across all of our key industries to invest in AI solutions that allow customers to gather insights to prevent unexpected downtime, maximize asset performance, and ensure worker safety, all while avoiding zero-day cyberattacks on essential IT infrastructure."



Barn Owl Blazing New Trails in Agriculture with AI Technology – Colorado Office of Economic Development and International Trade

Posted: at 11:53 pm

Colorado is a trailblazer in many senses of the word. Besides the obvious kind of trail Coloradans hike up or ski down, the state creates many opportunities for innovation, especially for ideas that might positively impact our world or make lives easier.

The Advanced Industries Accelerator (AIA) Program was created in 2013 to promote growth and sustainability in Colorado's advanced industries by driving innovation, accelerating commercialization, encouraging public-private partnerships, increasing access to early-stage capital, and creating a strong ecosystem that increases the state's global competitiveness. The AIA program operates under the Global Business Development (GBD) division of the Office of Economic Development and International Trade, and since the program's inception, 194 projects have been funded, 3,600 jobs created, 2,800 jobs retained, and $1.9 million gained in third-party capital.

Specifically, one grant offered by AIA is the Advanced Industries Early Stage Capital and Retention Grant. This grant helps Colorado-based advanced industries technology businesses develop and commercialize technologies that will be created or manufactured in the state. Advanced technology must be innovative or disruptive, something that is different from currently available technology in the industry, and it must impact at least one of the following fields:

The current cycle's application is open until March 3; however, interested parties have two opportunities to apply each year: January-March and July-September. Learn more about the Advanced Industries Accelerator Programs here.

When siblings Sarah and Jaron Hinkley moved back to their hometown of La Junta, CO, they were not sure what to do with their professional experiences in the food industry and drone technology services, respectively. Sarah and her husband Bryan considered starting a restaurant until Jaron came to them with an idea to use his drone technology expertise to help local farmers. The trio quickly realized that drone technology could greatly impact the agricultural industry, especially as farmers face decline in labor and resources. By creating Barn Owl Drone Services, Sarah, Jaron and Bryan can offer table-to-farm services (instead of Sarah and Bryan's original plan of farm-to-table dining) by sitting down with farmers to learn about what they need, what challenges they face and determine how to provide them the technology they need to make their work more efficient.

Barn Owl was awarded $200,000 from the Advanced Industries Early Stage Capital and Retention Grant in 2021. This award, along with other funding from investors and matching grants, has allowed Barn Owl to grow its reach from the potato farms in San Luis Valley to corn fields in Olathe and provide valuable services to farmers across Colorado. These services include crop analysis, mapping, monitoring and testing; photography and videography for marketing needs; volumetric measurements and heavy machinery inspections. Perhaps most unique is Barn Owl's fleet of robotic weeding units, which cut weeds that interfere with crops. This allows farmers to cut down on labor costs and avoid using chemicals or pulling weeds, which disturbs the soil. Barn Owl will also hire and train robotic operators to manage the drones, creating employment opportunities for highly skilled workers in rural areas.

Ultimately, Barn Owl's goal is to improve the lives of farmers, their families and the food we all eat. Barn Owl is grateful for the opportunity to be supported by this grant and by the State of Colorado, which Sarah sees as setting a standard for other states to invest in agriculture and business. As a new mom running a woman-founded and -owned business, she knows it takes a community to not only raise a baby, but also a business.


Korean Air Predicts Diversions On The World's Busiest Route With AI – Simple Flying

Posted: at 11:53 pm

Earlier this month, Simple Flying reported about the prospects of Korean Air's jump to AWS Cloud. There are considerable privacy, efficiency, and security benefits to be had with these technological progressions. These advantages extend across the operational spectrum, including on the world's busiest route.

From ticket fares and route scheduling to food waste monitoring and fuel usage, AI and machine learning are overhauling internal and external airline operations. Korean Air highlights that it is able to look at every action, policy, and procedure from all departments with cloud computing. Subsequently, each department leader is able to talk the language and create products and services that cater to customer needs.

One service that is utilizing AI and machine learning in the cloud environment is Korean Air's route between Gimpo International and Jeju International. Last year, there were at least 85,880 flights between the two airports. This figure equates to approximately 235 flights a day.
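
The daily figure follows from simple arithmetic: \( 85{,}880 \ \text{flights} \div 365 \ \text{days} \approx 235 \ \text{flights per day} \).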

The volcanic island of Jeju is a tourism hotspot, allowing it to earn the nickname of South Korea's Hawaii. Notably, the area houses 660,000 residents, but as many as 15 million travelers from across South Korea and beyond flock to the land to enjoy the warm weather and white sand beaches.

The island's positioning regularly sees it experience sensitive and disruptive weather conditions. Typhoons are a regular occurrence, especially in the summer months.


So, despite being such a popular destination, there are plenty of risks involved. Therefore, carriers have to ensure that they are always effectively planning ahead. This process has essentially been manual for the most part. However, Korean Air highlights that modern technology has been a saving grace on its services to Jeju.

"Previously, we used to use Excel spreadsheets to understand if there would be a diversion. Gimpo-Jeju is the busiest route in the world and we had a lot of diversions because of the weather in Jeju. It is an island, and wind comes from every different angle, depending on the climate changes," Korean Air CMO & CIO Kenneth Chang told Simple Flying.

"Before, our control tower used to manually try to guess if there's going to be a diversion on a specific day, specific hour, or specific flight. We are now able to leverage the cloud infrastructure and the AI/ML technology we put our data through, and it spits out recommendations. What we found out was that it was close to 90% accuracy."

With this approach, Korean Air is now able to shift manual processes away from the experts and give them the freedom to work on other crucial aspects. Machine learning and AI provide the results to allow the experts to make the final decision.
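
Korean Air has not published details of its model, so the following is only an illustrative sketch, with invented features and thresholds, of how weather and schedule data for a flight could feed a probability-of-diversion classifier of the general kind described above.

```python
# Illustrative diversion-risk classifier: weather and schedule features in,
# probability of diversion out. Features, data, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 5000
# Columns: crosswind (kt), gust (kt), visibility (km), hour of day
X = np.column_stack([
    rng.uniform(0, 40, n),
    rng.uniform(0, 60, n),
    rng.uniform(0.5, 10, n),
    rng.integers(6, 23, n),
])
# Synthetic label: diversions likelier with strong crosswind, poor visibility, or heavy gusts.
y = ((X[:, 0] > 25) & (X[:, 2] < 3) | (X[:, 1] > 45)).astype(int)

model = GradientBoostingClassifier().fit(X, y)
new_flight = np.array([[30.0, 20.0, 2.0, 18]])
print("diversion probability:", model.predict_proba(new_flight)[0, 1])
```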

The data allows the airline to make better-informed decisions, leading to fewer surprises and a reduction in last-minute disruptions to passengers. Ultimately, cloud technology has significantly shaken up internal operations, and, in turn, had a progressive impact on the passenger side.

What are your thoughts about how Korean Air is able to utilize its cloud services across its operations? What do you think of the carrier's overall deployment of modern technology? Let us know what you think of the airline and its initiatives in the comment section.


Watch an AI Play the Best Game of Tetris You’ve Ever Seen – Gizmodo

Posted: at 11:53 pm

Is there a more satisfying experience in video gaming than clearing four lines at once in Tetris? (A move technically referred to as a tetris.) It turns out there is: watching an AI developed by Greg Cannon play Tetris flawlessly while prioritizing clearing four lines as frequently as possible.

Like human players, Cannon's impressive StackRabbit AI gets better at playing Tetris through repeatedly playing and analyzing the game to develop improved strategies. But unlike human players, StackRabbit has nerves of steel and doesn't start to panic as the ever-growing stack of tetrominoes approaches the top of the play board, which it pairs with lightning-quick reflexes to play one of the most mesmerizing and impressive rounds of Tetris you've probably ever seen.

Clearing four lines at once is not only satisfying, it's also the best way to quickly rack up points when playing Tetris. But while, theoretically, a talented player could stack tetrominoes indefinitely, the 8-bit NES version of the game (which is used for The Classic Tetris World Championships) starts to melt down as gameplay approaches level 29, where the game's speed doubles. The developers assumed this was the point where human players wouldn't be able to keep up, and while some have managed to make it just past level 29, the game starts to quickly exhibit graphical glitches as the load on the NES's processor increases.

Human players have managed to hit NES Tetris high scores of over 1.6 million points, but with artificial human limits removed, Cannon's StackRabbit AI managed to reach level 237 of the game with a score of 102,252,920 points after around an hour and five minutes of gameplay. Watching the AI's unbelievable run is often as confusing as it is mesmerizing, as in later levels the game starts using the wrong graphical elements to build the tetromino pieces.

The glitches don't faze the AI, however, which works by pre-planning for whatever random piece appears next. It prioritizes clearing four lines at a time, but due to the randomness of the falling pieces, and the occasional scarcity of straight tetrominoes needed to complete a four-line tetris, the AI does occasionally have to clear single lines, and once in a while ends up with a gap in the stack of pieces, but watching it quickly recover is equally amazing.
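
Cannon has not published StackRabbit's evaluation function in this article, so the code below is only a generic sketch of the kind of board-scoring heuristic such an AI can use to rank candidate placements: penalize stack height and covered holes, and reward keeping a column open for a future four-line clear. The weights are invented.

```python
# Toy Tetris board evaluation: score a 20x10 board so a search over placements
# prefers flat, hole-free stacks that keep one column open for four-line clears.
# Weights are invented for illustration and are not StackRabbit's.

def column_heights(board):
    """board: 20 rows of 10 cells, 1 = filled, 0 = empty (row 0 = top)."""
    heights = []
    for col in range(10):
        h = 0
        for row in range(20):
            if board[row][col]:
                h = 20 - row
                break
        heights.append(h)
    return heights

def count_holes(board, heights):
    """Empty cells with at least one filled cell above them."""
    holes = 0
    for col in range(10):
        top = 20 - heights[col]
        holes += sum(1 for row in range(top, 20) if board[row][col] == 0)
    return holes

def evaluate(board):
    heights = column_heights(board)
    bumpiness = sum(abs(heights[i] - heights[i + 1]) for i in range(8))  # ignore the well column
    holes = count_holes(board, heights)
    well_open = 1 if heights[9] == 0 else 0   # keep the rightmost column free for an I piece
    return -0.5 * sum(heights) - 2.0 * holes - 0.3 * bumpiness + 1.5 * well_open

empty_board = [[0] * 10 for _ in range(20)]
print(evaluate(empty_board))  # a flat, empty board scores highest
```

A search then simulates every legal placement of the current (and next) piece and keeps the one with the best score.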


How AR, Computer Vision And AI Coalesce For Smart City Cleaning – Forbes

Posted: at 11:53 pm

Computer vision in smart cities

Rather unsurprisingly, urban jungles generate much more waste than towns and villages. As smart cities are on the extreme end of the urbanization spectrum, the waste generated in such places is expectedly huge. Generally speaking, global waste is expected to increase by about 3.40 billion tonnes by 2050. If not managed well, this accumulated waste can have disastrous implications for public health and the environment. Smart cities have the technological means with which waste management can be simplified and made more effective. Various technologies, such as AR, AI and computer vision in smart cities, are used to make such zones clean and sustainable. These technologies assist public waste management agencies in smart cities in a variety of ways.

The main reason for improving cleanliness in smart cities is to prevent public health emergencies. Considering that, water management should be one of the biggest priorities for smart city governance bodies. Water management issues such as contamination, leakages and distribution-related problems cause problems in healthcare and other vital sectors such as manufacturing. Authorities tasked with carrying out urban cleaning can use AI and computer vision in smart cities to constantly monitor water quality and reduce leakages as they can create several bacteria-ridden puddles in smart cities.

In combination with computer vision and IoT-based purity and turbidity sensors, machine learning can be employed to accurately detect contamination levels in the water. Such tools also come in handy to trace water flow, which is useful for detecting the filthy areas in complex pipeline networks. Based on the data captured by IoT sensors, AI-based tools can determine factors such as the Total Dissolved Solids (TDS) levels and pH of water that is being processed for distribution. Such tools categorize water bodies based on such parameters. The training of AI models for such tools involves the analysis of thousands of datasets to predict the quality of a given water sample.
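
As a hedged illustration of the kind of model described in this passage, the sketch below trains a classifier that maps IoT sensor readings (TDS, pH, turbidity) to a water-quality category; the value ranges, labels, and thresholds are invented for the example rather than any city's real parameters.

```python
# Illustrative water-quality classifier over IoT sensor readings.
# Feature ranges, labels, and thresholds are invented for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 3000
tds = rng.uniform(50, 1200, n)        # total dissolved solids, mg/L
ph = rng.uniform(5.5, 9.5, n)         # pH
turbidity = rng.uniform(0.1, 15, n)   # NTU

# Synthetic "ground truth": acceptable if all readings fall inside common guideline ranges.
label = ((tds < 500) & (ph > 6.5) & (ph < 8.5) & (turbidity < 5)).astype(int)

X = np.column_stack([tds, ph, turbidity])
model = RandomForestClassifier(n_estimators=100).fit(X, label)

sample = np.array([[320.0, 7.1, 1.2]])
print("predicted class (1 = within guideline ranges):", model.predict(sample)[0])
```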

As stated above, water leakages can cause hygiene-related problems in smart cities. Water leakage and wastage are detrimental to domestic and industrial cleaning purposes. Additionally, water shortage and leakage result in problems in sludge dewatering and agriculture. To address such problems, smart cities use computer vision-based intelligent cameras and sensors near pools, tanks, reservoirs to raise leakage or loss alerts. AI-based leakage detection systems can use sound sensors to detect leaks in pipeline networks. Such systems detect leaks by assessing the sounds in water pipes.

As you can see, AI and computer vision in smart cities play significant roles in autonomously managing water distribution, monitoring purity levels and preventing wastage.

Most smart cities strive to be a part of an ideal circular economy where every product is 100% recyclable. A circular economy, although difficult to achieve, is one of the ways in which such zones of industrialization can be environmentally sustainable. Garbage detection and classification are vital for the processing and recycling of waste. Garbage that can be identified and classified can be recycled much more effectively. The use of computer vision in smart cities makes garbage detection and classification autonomous. In the long run, the use of computer vision in smart cities makes it possible to limit their contribution to inevitable climate change.

Recycling is a long and complicated process. The first step of getting garbage recycling right involves optimizing waste processing. Waste treatment facilities in smart cities segregate recyclable waste from the rest on the basis of their capacity to be processed and reused. In such processes, achieving 100% accuracy is challenging if only manual labor or standard automation tools are used. Such basic tools cannot carry out visual processing and analysis of waste materials to deduce recyclability on the basis of composition and other characteristics. Recycling rates can be improved by using AI and computer vision in smart cities for monitoring waste and assisting with waste segregation.

Computer vision-based tools can improve decision-making in such processes and eliminate any anomalies. AI and computer vision use algorithms and deep learning for the analysis of daily waste generated from various corners of a smart city. IoT sensors, once again, are used in multiple monitoring points in trash cans, which enables AI and computer vision tools to determine aspects such as mass balance, purity and composition, among others. All in all, computer vision improves the process of garbage categorization by reducing the percentage of wasted recyclables before the actual recycling can take place.
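
A minimal sketch of the computer-vision step described above might fine-tune a pretrained image backbone on labelled waste photos; the image folder layout and class names are hypothetical, and the loop shows a single training pass.

```python
# Sketch: fine-tune a pretrained CNN to classify waste images into material categories.
# The folder layout waste_images/<class_name>/*.jpg and the class names are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("waste_images", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # e.g. plastic, glass, metal, organic

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train only the new classification head
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:   # one pass over the (hypothetical) dataset
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```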

Robotics, another AI-based application, has increasingly emerged in processes involving waste recycling in smart cities around the world. Specialized "recycling robots" are autonomously directed by the findings of computer vision-based garbage segregators. Robotized arms can pick and separate waste collected in smart cities into various containers: wet recyclable waste, dry recyclable waste, and toxic waste, among others. Apart from making the process of waste management autonomous, recycling robots are also highly relevant in the current pandemic age as they allow workers in waste management facilities to keep their distance from the garbage collected from different zones, including potentially infected patients' homes, in a smart city.

Robots used in waste processing use computer vision, color-coded cameras, laser sensors and metal detectors to classify the waste materials before directing them towards the different kinds of processing zones: recycling, biodegradation and others. Robots with several arms and suction tools make segregation faster. Then, once the recyclables are separated from the other waste materials, robots make the processing itself autonomous. Generally, waste recycling involves steps such as heating and melting waste materials. Such processes involve several dangerous agents such as high temperature and pressure, volatile chemicals and others. Using autonomous robots allows waste recyclers to protect their workers from such agents. Robots can withstand the pressure, temperature and abrasive chemicals to facilitate the recycling process efficiently. This is also where AR enters the fray, as the scientists tasked with recycling waste can use their mobile devices to monitor the recycling process and also remotely guide robots to carry out the operations precisely and without errors.

Recycling is a massive part of waste management, circular economy and environmental sustainability. Apart from segregating materials, robots can also be used for autonomous quality control during waste management.

Robotics, AI, IoT and computer vision in smart cities eliminate human error from waste classification, processing and recycling, making waste management better and enhancing the cleanliness aspect of smart cities as well as enabling smart cities to almost realize the ideal of a truly circular economy.


Technologies such as AR and VR serve one of the main needs of smart city waste management: making training programs better and more realistic for newer workers once the old workforce eventually retires. AR-based tools allow workers to know their roles accurately. The experience of learning the different aspects of waste management is much improved when workers actually get to perform tasks in a make-believe simulation created by AR or VR-powered devices instead of relying on a rulebook, website or another standard training resource.

Additionally, as stated above, AR is a useful tool to control robotic cleaners. So, users can use their mobile devices to actively monitor the progress of such cleaners in smart cities. A combination of computer vision, AI and AR can automate several processes, such as garbage collection from individual housing societies in smart cities, enabling waste management officers to know in real time which locations have been cleaned by cleaner robots and which ones are remaining. Based on this information, such robots can be managed dynamically by officials. AR creates different color zones for this purpose, red for dirty and green for clean, which makes it easier to differentiate between them. This particular feature of AR tools reduces the chances of certain places being cleaned twice or thrice, which is useful to optimize resource usage.

Certain surfaces and areas need to be cleaned with greater pressure and with more cleaning resources. Based on the color-coded information from AR applications, manual or robotic cleaners can use the necessary pressure or cleaning material to cleanse such areas and surfaces.

The combination of AR, AI and computer vision in smart cities has several other applications. Each technology brings something unique to the table: IoT captures information and actuates the tools that will process it; computer vision and AI evaluate the information and use pattern recognition and data classification to simplify waste management in smart cities; and finally, AR makes cleaning and segregation monitoring much simpler for the designated authorities tasked with smart city cleaning.


What Studying Consciousness Can Reveal about AI and the Metaverse (with Anil Seth) – Harvard Business Review

Posted: at 11:53 pm

AZEEM AZHAR: Welcome to the Exponential View podcast. I'm your host, Azeem Azhar. Now every week I speak to the people who are shaping our future. So far in this series we've had experts in everything from fusion and quantum computing to cryptocurrency and the future of the car. Now this week's episode is a little different. Bear with us. It is just as mind expanding. My guest today is Anil Seth, a professor of cognitive and computational neuroscience at the University of Sussex. He is a friend of mine and the author of a recent book, Being You: A New Science of Consciousness. In it, Anil posits that what we think of as reality is a series of controlled hallucinations. We construct our version of the world according to our preconceptions and best guesses. Both the science and the philosophy of consciousness are fascinating, and they have fascinated me for more than 30 years. Recent developments hint at a range of real world applications that could change the way we live. From clinical uses to applications in virtual reality and artificial intelligence, the science of consciousness touches on so many exciting areas, and no one is better placed to explain why than today's guest. Anil Seth, welcome to Exponential View.

ANIL SETH: Hi Azeem, it's really great to be here. I'm glad we're able to talk now.

AZEEM AZHAR: And I am glad that you have summoned up the energy to be here. And I think this is going to be the first time where I feel I'll be able to keep pace with you, only because you are slightly under the weather, but I'm feeling perfectly fine. So, thank you for giving me that slight handicap advantage.

ANIL SETH: Let's see how that goes.

AZEEM AZHAR: Well, we met several years ago when there was a social media meme that went crazy. It was about a dress, whether it was black and blue, or white and gold. And we were both asked to go on television to talk about it. I had to talk about why Kim Kardashian was tweeting it. And you got to talk about why we perceive things the way we do. Which is really the heart of your work and your professional career over the past decades. Now, consciousness has occupied thinkers for millennia. We think about Descartes or Thomas Nagel's paper, What is it like to be a bat? And in the '90s, there was a lot of work, a lot of emphasis on new ideas, perhaps relying on new instrumentation, like MRIs and other kinds of experiments that we could use to understand what consciousness is. Take us through your view and how you got to it.

ANIL SETH: My approach to this question actually touches on a couple of the things you mentioned. First thing is you've got to start with a definition. What do we mean by consciousness? There are all sorts of definitions out there. But I mean something very specific, very biological, very personal. It is any kind of subjective experience. And this is what the philosopher Tom Nagel said, of course. He said, For a conscious organism, there is something it is like to be that organism. It feels like something to be me. It feels like something to be you, right? But it doesn't necessarily feel like anything to be a table or a chair or an iPhone. There's what David Chalmers called the hard problem. You have this world made of physical stuff, made of material, atoms or quarks or whatever it might be. And somehow out of this world of physical interactions, the magic of consciousness emerges or arises, and it's called a hard problem because it seems almost impossible to solve, as if no explanation in terms of physical goings-on could ever explain why it feels like anything to be a physical system. But we are existence proofs that it does. So instead of addressing that hard problem head on, my approach, and it's not only my approach, it builds on a history of similar approaches, is to accept that consciousness exists. And instead of trying to explain how it's magicked out of mere mechanism, to break it up into its different parts and explain the properties of those different parts. And in that way, the idea or the hope is that this hard problem of consciousness, instead of being solved outright, will be dissolved in much the same way that we've come to understand life, not through identifying the spark of life, but through explaining its properties as part of this overall big concept of what it is to be a living system.

AZEEM AZHAR: The hard problem that Chalmers talks about, I guess back in the mid-nineties, perhaps when you were an undergraduate, is a really, really tricky one, but even the easy problems of consciousness, how the mechanisms function, were pretty difficult. But if your approach is tackling neither the easy problems nor the hard problem, you call it the real problem. Why do you say it's the real problem?

ANIL SETH: Well, partly to wind up David Chalmers. I mean, he's been a fantastic influence on the field, of course, but dividing the game between the hard problem and the easy problem, I think, forces people to ignore consciousness entirely. If you focus on the easy problems you're studying all the things that brains are capable of, that you can think about without needing to think about consciousness. These are challenging problems, but they're not conceptually difficult in the same way that the hard problem is. And so if you divide it this way, you're either sweeping consciousness under the carpet, or you are facing this apparently unsolvable mystery. So, I call it the real problem simply to emphasize that yes, we have conscious experiences, and importantly, consciousness is not one single big, scary mystery. It can be addressed from different angles. We can think about what's happening when you lose consciousness under anesthesia or in sleep. We can think about perception. Why did some people see a gold and white dress and why do other people see a blue and black dress? And then for me, the most interesting aspect is we can think about the self. Now, the self is not a sort of essence of you that sits somewhere inside the skull, doing the perceiving. The self is a kind of perceptual experience too, and it has many properties. The experience of being a body. The experience of free will. All these things are aspects of selfhood. And I think we'll make a lot more progress by addressing these aspects of consciousness somewhat separately. We can take the approach of trying to explain what makes them distinctive and get a lot further in understanding why our conscious experiences are the way they are. And as we do that, what's happening, certainly for me, is that this hard problem seems to lose its luster of mystery a bit. We're doing what science always does, which is we're able to explain, predict and control the properties of a system. And there's no reason we can't do that when it comes to consciousness. That's the real problem of consciousness.

AZEEM AZHAR: One of the things that we could do, I mean, this comes back from our own experience. It comes back from the Nagel paper, is that we can recognize that there is this quality to be a thing, and to have that sense of self and this sense that we have of consciousness. But let's take a step back. If we know that consciousness is there, why do we have it?

ANIL SETH: I don't think there needs to be any single reason why consciousness is part of the universe. We don't know how far it extends either. But for all creatures that are conscious, I think there's a good hint about function when we think about what we can call the phenomenology of consciousness, what our experiences are actually like. And if you think about your conscious experience at any particular time, it brings together a vast amount of information about the world in a way that's not reflecting the world as it is, but is reflecting the world in a way that's useful to guide your behavior. You experience all of these things in a unified scene, together with the experience of being a self, together with the experience of emotion, things feel good or bad, and with the opportunities that you have to act in that world. So there's this incredibly useful unified format for conscious experiences that provides a very efficient way for the organism to guide its decision making, its behavior in ways that are best suited basically to keeping the organism alive over time. And actually that's how I ground my whole ideas about consciousness. They're fundamentally rooted in this basic biological imperative to stay alive.

AZEEM AZHAR: So that is evolution all the way down. And we have evolved this capability, because it helps us make sense of all of our experiences, all the stimulation that we get in the external world and put it into ourselves so that we can experience that in ways that allow us to survive and allow us to potentially thrive, take the right kind of actions, that sort of thing.

ANIL SETH: That's right. But also it's worth emphasizing that the self is not the recipient of all these experiences. The self is part of that experience. It's all part of the same thing. And this is one of the more difficult intuitions to wrap one's head around. And I think when thinking about consciousness, I always use this heuristic. I always remind myself that how things seem is not necessarily how they are. So it seems as though we are perceiving the world as it really is. That colors, like the color of the dress, exist objectively out there in the world. Now stuff does exist out there in the world, but the way we experience it, especially for something like color, depends on the mind and the brain too. And it seems as though the self is the thing that's receiving all these perceptions, but that again is not how things are. The self is also a kind of perception. And the fact that it's all integrated into a unified conscious experience, where we experience the self in relation to the world, that I think points to the function of consciousness, that it's useful to guide the behavior of the organism.

AZEEM AZHAR: This key idea, you have this sentence, the purpose of perception is to guide action and behavior to promote the organism's prospect of survival. We perceive the world not as it is, but as it is useful for us. So this is the rationale for why consciousness exists. And you then connect it to the notion of it being a controlled hallucination, capturing the idea that in a way consciousness is directing what we, I hesitate to use the word choice, but what we choose to access from the real physical world, this mechanism of controlled hallucination.

ANIL SETH: It's a bit of a tricky term to think about perceptual experience, because there's a lot of baggage to things like hallucination. The reason I use controlled hallucination to describe perceptual experience is to emphasize that all of our experiences are generated from within. And we don't just receive the world through the transparent windows of the senses. What we perceive is the brain making a best guess, an inference, about the causes of its sensory signals. And the sensory signals that come into the eyes and the ears and all the senses, they're not just read out by the self inside the brain. No. The sensory signals are there to calibrate these perceptual predictions, to update these perceptual predictions. Again, according to criteria of utility, not necessarily according to criteria of accuracy. So the control is just as important as the hallucination here. I'm not saying that our perceptions are all arbitrary or that the mind makes up reality. No. Experiences are always constructed, but they're tied in very, very important and, as we've just said, evolutionarily sculpted ways, so that the way we experience the world is in general useful for the organism. So what we might think of as hallucination colloquially, when, like, I see something, I just have a visual experience that nobody else does, and there's nothing in the world that relates to it. You can think of that as an uncontrolled perception, when this process of brain-based best guessing becomes untethered from causes in the world.
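
Seth does not give equations in the interview, but in the predictive-processing literature the "best guess calibrated by sensory signals" idea is often written as a prediction-error update; a generic textbook form (not a quotation from Seth) is

\[ \hat{\mu}_{t+1} = \hat{\mu}_t + k \left( s_t - g(\hat{\mu}_t) \right), \]

where \( \hat{\mu}_t \) is the brain's current best guess about the causes of its sensory input, \( g(\hat{\mu}_t) \) is the sensory signal that guess predicts, \( s_t \) is the actual sensory signal, and the gain \( k \) reflects how much the incoming signal is trusted relative to the prediction.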

AZEEM AZHAR: There are a few words that you used in your last answer that you talked about, inference and prediction and utility. And these are all words that we might use when we're talking about artificial intelligence. So thank you for putting those words out there. Because when we talk about AI with you later in this discussion, we will come back to some of them. But let's go back to this notion of consciousness having this purpose. It helps organisms' prospects for survival, that there is this notion of a kind of controlled hallucination, given all of these signals that are coming toward us. Now, for this to be a scientific theory, we have to be able to test it. We have to be able to run experiments on aspects of these assertions. So once you've made an assertion like that, what are the kind of experiments that you can run now to demonstrate parts of this theory?

ANIL SETH: This is a really good question. Because, of course, theories need to be testable in order to have any traction and to have a future. The idea of the brain as a prediction machine does have a long history. And you can take that idea and you can generate a lot of testable hypotheses about it. For instance, a whole range of work, some of it from my own lab, others from other labs, asks how our perceptual experience changes based on the expectations that our brain explicitly or implicitly has. If this controlled hallucination view is right then perceptual content should be determined, not by the sensory signals, but by the brain's top-down predictions. So we can test this in the lab, in what we would call psychophysical experiments, where we carefully control the stimuli people are exposed to.

AZEEM AZHAR: You sort of prime someone in advance, right? With one cue they might interpret their experience one way, and if you've primed them a different way they'll interpret it in a different way.

ANIL SETH: Right. This is a very blunt, very simplistic way to get at this. You can, for instance, tell people that seeing a face is more likely than seeing a house. And then you give them a situation which experimentally we've set up so that there's an ambiguous image. And they're more likely to see what they expect than what they don't expect, or they'll see what they expect more accurately and more quickly than what they don't expect to see. So that's a very simple kind of prediction that you make. It by no means validates or proves this whole theory. We need to do brain imaging studies as well. And these are beginning to happen in our lab, and in other labs across the world too, where we find that indeed we can read out what people are perceiving by looking at these top-down flows of information in the brain. Certainly in vision.

AZEEM AZHAR: Are these the experiments we've seen recently where you put someone into an MRI machine that's looking at their brain and you get them to think about a dog, and then you are able to look at the output and have a system that predicts that they were looking at a dog and recreate what they are thinking about? Is it that sort of thing that we're talking about here?

ANIL SETH: It's based on the same sort of idea. So that's this emerging technology of brain reading, right? Can you decode what someone is looking at or thinking simply by feeding a load of brain imaging data into a machine learning classification algorithm? And you can. And there's a lot of debate in the field about whether this is telling us anything about the brain or whether it's just telling us that machine learning classification algorithms are quite good. But you can do this in a way that's more constrained to the anatomy of the brain. For instance, you show people an image and a quadrant of it might be missing, but it turns out a machine learning algorithm can still decode the content of the image from brain imaging data from the part of the visual cortex where there was no stimulation, and indeed from a layer of that visual cortex that receives top-down input. The fact that you can do that is telling you there's information in this top-down signaling that at least partly determines, or is relevant to, the content of what someone is experiencing. So experiments that build on this kind of approach are helping us disentangle not just which regions are implicated in perception. I mean, neuroimaging has this history and starting point of focusing on, is this region a hotspot? Does it light up? Does this region light up? And I think these days we are moving beyond that, to think about networks and mechanisms and processes, rather than just this area or that area.

AZEEM AZHAR: There is a relationship between scientific theory and the tools that we have to run an experiment, and sometimes the two get somewhat out of sync. I think one of my favorite examples is when Einstein came up with the general theory of relativity in 1916, and he had these ideas of gravitational waves. It took us a century, until the LIGO device was available, to actually experimentally prove that theory. When you look at the progress in your field and the types of experiments that have happened, certainly over the last 20 years, do you think that you've got the science of consciousness on a path that is more in sync with the tools that we have to do the tests, or is this going to end up being a little bit like general relativity, where we have to sort of rely on it and then wait a hundred years before we can prove it?

ANIL SETH: Neuroscience, and especially the neuroscience of consciousness, faces three specific challenges. One is brain imaging. We don't yet have a single brain imaging technology that is able to record with high time resolution, with high spatial resolution (that is, from many, many different small parts of the brain at once), and with broad coverage. We can get any two out of three maybe, or one out of three, but we can't visualize the activity of a brain in the detail that we would ideally have. That's one challenge. So developing new technologies that can manage that is not necessarily going to be critical, I think, but it would certainly be helpful. The second challenge is specific to consciousness, and that is that the data by which we test theories of consciousness are of a different kind. There's subjective data. It's not data that we can all get from LIGO or the James Webb telescope and agree about. It's subjective data. Now, some people say this means you can't do a science of consciousness at all, because you are dealing with data that is intrinsically private and subjective. I don't think that's quite true. I think it just adds a layer of difficulty. There's a whole tradition in philosophy called phenomenology, which is about how to describe, how to report, what's actually happening in the space of conscious experience. And there are methods now in psychology and in psychophysics where we can try to remove various biases in how people report what they experienced. So it adds complication, but it's not a deal breaker. The third thing, and this is something that's actually going on now, is that there's a movement to come up with experiments that disambiguate between competing theories of consciousness. Over the last 10 or 15 years in consciousness science, a number of different theories have been refined and proposed, including this idea of the prediction machine. But there are other ideas too: that consciousness is to do with integrated information in the brain, or that it's to do with the broadcast of information around the brain. And the challenge is to come up with experiments that distinguish between these theories, rather than just trying to be aligned with any particular one. And these experiments are now beginning to happen, which I think is very promising for the field.

AZEEM AZHAR: I then start to think about what the real-world applications of all of this might be and what it might be telling us in practice. I think of roughly three areas. I think about what's happening within medicine, within neurological and psychological conditions. I think about what's happening within artificial intelligence and the sort of work that's happening there. And also what's happening in the field of virtual reality, because I can see that virtual reality presents us with a whole set of sensory experiences that we may want to have sort of controlled hallucinations around. So I'd love to explore those three areas, perhaps starting with that first one, which is thinking about medical applications. I mean, what are we learning about psychiatric conditions or psychological conditions or neurological ones that is being illuminated by this kind of work?

ANIL SETH: If you take an example from neurology, people who suffer severe brain trauma often go into a coma, where they unambiguously lose consciousness, and then they may recover partially to something called the persistent vegetative state. And this is a state, when you diagnose it from the outside as a neurologist, in which the patients go through sleep-wake cycles, but there really doesn't seem to be anyone at home. There's no voluntary action. There's no response to commands or questions. It seems like no consciousness is there. And people are often treated that way. That becomes a diagnosis of sort of wakefulness without awareness. But what the science of consciousness is allowing clinicians to do now is to not just rely on external signs of consciousness, but to look inside the brain. And there's a great example of this. It's now about 10 years old, but it's a way of measuring the complexity of brain activity by basically disturbing the brain with a very strong, very brief electromagnetic pulse and then listening to the echo, listening to how this pulse bounces around the circuits of the brain. And this measure turns out to be quite a good approximate measure of how conscious somebody is, and has been validated under anesthesia and in sleep and so on.

AZEEM AZHAR: So it's like a consciousness meter.

ANIL SETH: It's like the start of a consciousness meter. And I wouldn't want to make that analogy too tight, because I don't think consciousness does lie along a single dimension, but I think in these clinical cases it can be usefully approximated that way. And indeed it is being used in certain clinics now. This measure, this consciousness meter measure, is called the perturbational complexity index; it was developed by Marcello Massimini and Giulio Tononi and colleagues. It gives quite a good indication of whether somebody is in fact conscious, even though they can't express it outwardly, or will recover at least some conscious awareness.

ANIL SETH: Because if you track the trajectory of patients over time, you'll find that people who score high on this perturbational complexity index tend to be the ones who do better over time. And this is a direct clinical application of focusing on the brain basis of consciousness. Alongside that, there are of course many applications in psychiatry too, because the primary symptom of most psychiatric conditions is a disturbance in experience. The world seems different. People have actual hallucinations. People experience their body in different ways. People have delusional beliefs. And so now there's this whole field of computational psychiatry, which is trying to understand the mechanisms that give rise to the symptoms that appear at the level of conscious experience. Because once we understand the mechanisms, we can start to think about really targeted interventions and bring psychiatry up into the 21st century, where it should be for medicine these days.

AZEEM AZHAR: Is consciousness to be found in a single place in the brain, or is it emergent? I mean, do we know what the minimal physiological requirements for consciousness are?

ANIL SETH: Certainly consciousness is not generated in any single area. There's no seat of the soul, whether it's the pineal gland that Descartes identified or anywhere else. Consciousness emerges in some way from activity patterns that span multiple areas of the brain. But do we know the minimal neural correlate for conscious experience in a human brain? The answer is still no, but there are some who argue that a very basic form of consciousness can emerge just from the brain stem, that it doesn't require any cortex at all. That's sort of one extreme, and I don't think there's strong evidence for that. Then there's a very lively debate in the field at the moment about whether consciousness depends more on the front of the brain or on the back of the brain. Different theories predict different involvement of the frontal parts of the brain. Some theories say they're absolutely essential. Other theories say they're not. And so by designing experiments that can test the contribution of the frontal parts of the brain, we can begin to distinguish between different theories too.

AZEEM AZHAR: Now I'm interested in the interaction between consciousness and machines as well. I go back to one of the ways in which you describe consciousness. You say the purpose of perception is to guide action and behavior, to promote an organism's prospects of survival. It reminds me of the definition of intelligence that is often used in the artificial intelligence field, within computer science, where people say an agent is said to be intelligent if it can perceive its environment and act rationally to achieve its goals. So there seems to be a parallel between these different disciplines, the definition that you use for consciousness and the definition that some artificial intelligence researchers use for intelligence. They're not really the same thing at all, but I'm curious about those parallels.

ANIL SETH: Right. There are parallels, but I think there are also important distinctions, just in the specifics of the definition that you gave. There's a lot of work being done by the word rational in that definition of intelligence from the AI community. But consciousness should not be defined that way. Consciousness, back to our very beginning, is any kind of subjective experience whatsoever. Instead of just being sad when something bad happens, we can be disappointed. We can experience regret. We can even regret things we haven't yet done, anticipatory regret. But to conflate consciousness and intelligence, I think, is to underestimate what consciousness really is about. And making this distinction, I think, has a lot of consequences. For one thing, it means that consciousness is not likely to just emerge as AI systems become smarter and smarter, which they are doing. There's a common assumption that there's this threshold, and it might be the threshold that people talk about as being general AI, when an AI acquires the functional abilities characteristic of a human: oh, well, that's when consciousness happens, that's when the light comes on for that AI system. And I just don't see any particular reason, apart from our human tendency to see ourselves at the center of everything and at the top of every pyramid, to think that's going to be true. I think we can have AI systems that do smart things that need not be conscious in order to do them.

AZEEM AZHAR: You call this idea pernicious anthropocentrism, the idea that we have to be at the center of all of this. But when we think about what happens with engineered machines, as opposed to biological organisms, why are we saying this particular set of qualities that we call consciousness is present within biological living organisms, but can't be present in engineered, built ones?

ANIL SETH: I think there's just this big open question about whether consciousness depends on being made out of a particular kind of stuff. We are made out of carbon and neurons and wetware. Computers are made out of silicon, at least most modern-day computers. Now, some people would say that it really doesn't matter what a system is made out of; it just matters what it does, how it transforms inputs into outputs. This may be true. It may be that consciousness is the sort of thing that if you simulate it, you instantiate it. Playing chess is like this: if you have a computer that plays chess, it actually plays chess. But then there are other things in the world for which functionalism is not true, where the substrate, what it's made out of, actually matters. Think about a really detailed simulation of the weather. This can be as detailed as you like, but it never actually gets wet or windy inside that simulation. Rain is not substrate independent. So there's an open question here: is consciousness dependent on our biology? It's very hard to come up with a convincing reason why it must be, but it's equally hard to come up with a knock-down argument that it has to be independent of that substrate. And that's why I'm agnostic. But I do tend a little bit more towards the biological naturalism position. And that's primarily because, when we think about a living creature and we talk about the substrate, like what is the wetware that the mindware is running on? Well, in a computer you've generally got quite a sharp distinction you can make between the hardware and the software. But in a living organism, there's no sharp distinction between mindware and wetware. And if you can't draw a line between these, then it almost becomes an unanswerable question whether consciousness is independent of the substrate or not. Added to that, the only examples of things that we know are conscious are biological systems. So that should be a kind of default starting point until proven otherwise.

AZEEM AZHAR: If we did get to a stage where, because you haven't ruled this out, a computer became conscious, how could we know it was, if it chose not to tell us?

ANIL SETH: This is a big problem. And bear in mind that being conscious doesn't necessarily bring with it the ability to report; the system might not even be able to. Again, brain-damaged patients can't report things, even though they are conscious. I think the real danger in this area of artificial consciousness is that even though we don't know what it would take to build a conscious machine, we don't know what it wouldn't take. We don't know enough to rule it out. So it might in fact even happen by accident. And then, indeed, how would we know? The only way to answer that question is to just discover more about the nature of consciousness in those examples that we know have it, which will allow us to make more informed judgements. I actually think a more short-term danger is that we will develop systems that give the strong appearance of being conscious, even if we have no good reason to believe that they actually are. I mean, we're almost already there, right? We have combinations of things like language generation algorithms, like GPT-3 or GPT-4 shortly, and deepfakes, which can animate virtual human expressions very, very convincingly. You couple these things together, and apart from the actual physical instantiation, we're already in a kind of pseudo-Westworld environment where we're interacting with agents.

AZEEM AZHAR: And you've also identified this challenge through some of your experiments on priming: that you can take something ambiguous and you can prime me, and I might hear the description of a lovely meal and someone else might hear the description of a political position. And so there's perhaps a vulnerability in the conscious system toward things that also look and walk and talk as if they're conscious.

ANIL SETH: Absolutely. And I think this is something we need to keep very much front of mind as AI develops. We have a lot of cognitive vulnerabilities, and our cognitive vulnerabilities are already being exploited by social media algorithms and the like. AI systems that give the appearance of being conscious will be able to exploit these vulnerabilities even more. So there's a project I'm working on with some colleagues in Canada, Yoshua Bengio and Blake Richards and others, where what we're trying to do is figure out how implementing some of the functions associated with consciousness can actually enhance AI, overcome some of its bottlenecks, like its ability to generalize quickly to novel situations, choose the data that it learns from, all these sorts of things which we can do and which are closely associated with consciousness in us. Without having the goal of actually building a conscious machine, we want to adopt some of the functional benefits, but also do so in a way that can help mitigate some of these dangers. For instance, an AI system that is actually able to recognize its own biases and correct for them might be a very useful change in direction from where AI is currently going.

AZEEM AZHAR: So there's another technology theme that people are getting really excited about in 2022, which is the idea of the metaverse. And I guess the metaverse is the 2020s' version of virtual reality: creating environments that will be increasingly sensorially rich and immersive. To what extent would those appear to be real experiences to organisms that exhibit consciousness?

ANIL SETH: I have quite a problem with the overall objective of something like the metaverse. And it's a very basic problem, which is that I think in the society in which we live at the moment, we should be doing everything we can to reconnect ourselves with the world as it is, and with nature as it is, rather than trying to escape into some commercially driven virtual universe, however glittering it might be. But I also think there are important lessons here, or an important role that understanding consciousness has to play. When we experience a visual scene, we're engaging with it all the time. We don't just passively experience a scene and sit there like a brain in a jar. We're interacting with it all the time, and we need to understand how these interactions shape our experience. These are the sorts of experiments for which VR is very useful. And of course the flip side of that is that when we understand the role of interactions in shaping experiences, we can design VR environments to be more engaging, to be less frustrating, to perhaps be more useful, to the extent that they can be. And of course there are many very valuable applications as well. I just want to tell you about one experiment that we've been doing in the lab for a while, that I think is super interesting in this domain, which really relates to what you said about whether VR will get to the point that it's indistinguishable from real experience, setting aside whether we actually want to get there or not. It's an interesting question, right? So one of our experiments, led by Keisuke Suzuki and Alberto Mariola, is developing something we call substitutional reality. This is the idea: instead of using computer-generated graphics, we use, in this case, real-world video of, let's say, my lab. And we replay that real-world video through a head-mounted display so that as people look around, they can see the part of the room that they would see anyway. And in fact, that's what we do. We invite them in, they wear a headset, it has a camera on the front. And so to begin with, they are indeed experiencing their environment through the camera, projected into the headset, but then we can flip the feed and run the prerecorded video instead. And if you do it in the right way, people don't notice. So here's a situation, I think it's really the first situation, where people are fully convinced that what they're experiencing is real in a way that you never get in standard VR or in a cinema, however good the movie is. People really have the conviction that what they're experiencing is real, and yet it isn't. And this is a platform we can use to figure out, okay, now what happens if we mess with this movie in various ways? What happens to the person's perception when their high-level prediction of what's going on is that this is indeed the real world? And that's a set of experiments that we're working on right now.

AZEEM AZHAR: But that speaks to the potency, or the potential potency, of that set of technologies: that it could really deliver real experiences, right? Experiences that, based on the idea of the controlled hallucination, the organism, the human, is conscious of and believes they are experiencing, and may make decisions based on those experiences.

ANIL SETH: Yeah, potentially. I mean, at the moment this is obviously only possible in a very restricted circumstance. People have to come and sit in exactly the same place we recorded footage from, and so on. But these are technological constraints. There's not an in-principle objection to extending that kind of technology. And there's another benefit of doing this, and this gets back to the first set of applications, which is that there are a range of psychiatric conditions that are generally characterized not by people having positive hallucinations, like seeing things that other people don't or hearing things, but rather by reality seeming drained of its quality of realness: their perceptions start to feel unreal, their self can start to feel as if it's not really there. These kinds of conditions, which we might call dissociative conditions, are very, very tricky to deal with, because they don't present with these obvious positive symptoms. And so this general line of research and thinking, about what it takes for our brains to endow our perceptions with the quality of being real, understanding that, I think, will refract back onto some of these applications in psychiatry as well, where that quality of being real is attenuated or even abolished.

AZEEM AZHAR: I mean, I'm curious about where this might go. Science helps us get to settled understandings. It helped us get to a settled understanding of the relationship between the earth and the sun. It took Darwin to come along, and then many years of arguing and the discovery of DNA, until we got a settled understanding of how new species come to be and how they develop. When do you think science will come to a settled understanding of what consciousness is?

ANIL SETH: Oh, I hate that question so much. But it's an important question to ask. One of the strange things that I often hear when people talk about consciousness science and philosophy is that we still know nothing about how the brain generates consciousness or about how consciousness happens, that it's still this complete mystery. But if I think back to what people were saying and thinking 20, 30 years ago, when I was just getting going, there's been a massive increase in understanding, not only of the brain networks that are involved, but also of the kinds of questions that people ask. To throw in something very controversial right at the end, there's this question about free will. Do we have it? Do we not have it? Does it matter? Yes, it matters, because it influences all sorts of things like jury processes in law, when we hold people responsible and so on. But the questions are starting to change. It's become less a question of whether or not we have free will, and more a question of why experiences of voluntary action feel the way they do. How are they constructed, and what role do they play in guiding our behavior? They become more sophisticated questions. And I think that is going to be part of the evolution of consciousness science, just as much as finding new answers. The questions will start to change, just as happened in the science of life: we'll go beyond looking for the spark of life, the élan vital, and we'll come up with a richer picture of what consciousness actually is and what the right sorts of questions are to be asking about it. So the process of settling, I think, is going to be quite slow. I don't think it's a mystery that will be solved at any one eureka moment. But the progress really is heartening. And I think the last thing I'd say about it is that it's very useful even to gain a partial understanding of consciousness. That's useful for developing applications in technology and society and medicine. And fundamentally, it's useful for us. Because, besides all these applications, I think most of us, at some point in our lives, certainly when we were kids, have asked ourselves these questions. Who am I? What does it mean to be me? Why am I me and not you? What happens after I die? Understanding how experiences of the self and the world are constructed can help each of us understand our relationship with the rest of the world, with each other, and with nature much, much better, at a deeper level. And I think that's sufficient reward, and that reward is just going to keep on coming as we progress our understanding of the biology of consciousness.

AZEEM AZHAR: I know you cover many of these ideas in your new book, Being You, which is doing very well and is a great read. And of course, so much more to come. Thank you so much for your time today.

ANIL SETH: Thank you, Azeem. It's a real pleasure. Thanks for having me on. I've really enjoyed the conversation.

AZEEM AZHAR: Well, thanks for listening to this podcast. If you want to learn more about the cutting edge of AI, enjoy a previous discussion I had with Nathan Benaich and Ian Hogarth, authors of the annual State of AI Report. And if you want to know more about how the science of consciousness and philosophy of mind interacts with virtual reality, watch this space. We've got a great guest coming on to discuss what the metaverse might mean for us through the lens of consciousness. To become a premium subscriber of my weekly newsletter, go to http://www.exponentialview.co/listener. You'll find a 20% off discount there. And stay in touch. Follow me on Twitter. I'm @azeem, A-Z-E-E-M. This podcast was produced by Mischa Frankl-Duval, Fred Casella, and Marija Gavrilov. Bojan Sabioncello is the sound editor.

The rest is here:

What Studying Consciousness Can Reveal about AI and the Metaverse (with Anil Seth) - Harvard Business Review


Infermedica raises $30M to expand its AI-based medical guidance platform – TechCrunch

Posted: at 11:53 pm

Infermedica, a Poland-founded digital health company that offers AI-powered solutions for symptom analysis and patient triage, has raised $30 million in Series B funding. The round was led by One Peak and included participation from previous investors Karma Ventures, European Bank for Reconstruction and Development, Heal Capital and Inovo Venture Partners. The new capital means the startup has raised $45 million in total to date.

Founded in 2012, Infermedica aims to make it easier for doctors to pre-diagnose, triage and direct their patients to appropriate medical services. The company's mission is to make primary care more accessible and affordable by introducing automation into healthcare. Infermedica has created a B2B platform for health systems, payers and providers that automates patient triage, the intake process and follow-up after a visit. Since its launch, Infermedica has been used in more than 30 countries in 19 languages and has completed more than 10 million health checks.

The company offers a preliminary-diagnosis symptom checker, AI-driven software that supports call operators in making timely triage recommendations, and an application programming interface that allows users to build customized diagnostic solutions from scratch. Like a plethora of competitors, such as Ada Health and Babylon, Infermedica combines the expertise of physicians with its own algorithms to offer symptom triage and patient advice.

In terms of the new funding, Infermedica CEO Piotr Orzechowski told TechCrunch in an email that the investment will be used to further develop the company's Medical Guidance Platform and add new modules to cover the full primary care journey. Last year, Infermedica's team grew by 80% to 180 specialists, including physicians, data scientists and engineers. Orzechowski says Infermedica has an ambitious plan to nearly double its team in the next 12 months.


"We will invest heavily into our people and our products, rolling out new modules of our platform as well as expanding our underlying AI capabilities in terms of disease coverage and accuracy," Orzechowski said. "From the commercial perspective, our goal is to strengthen our position in the U.S. and DACH and we will focus the majority of our sales and marketing efforts there."

Regarding the future, Orzechowski said he's a firm believer that there will be fully automated self-care bots in 5-10 years that will be available 24/7 to help providers find solutions to low-acuity health concerns, such as a cold or UTI.

"According to WHO, by 2030 we might see a shortage of almost 10 million doctors, nurses and midwives globally," Orzechowski said. "Having certain constraints on how fast we can train healthcare professionals, our long-term plan assumes that AI will become a core element of every modern healthcare system by navigating patients and automating mundane tasks, saving the precious time of clinical staff and supporting them with clinically accurate technology."

Infermedica's Series B round follows its $10 million Series A investment announced in August 2020. That round was led by the European Bank for Reconstruction and Development (EBRD) and digital health fund Heal Capital. Existing investors Karma Ventures, Inovo Venture Partners and Dreamit Ventures also participated.

More:

Infermedica raises $30M to expand its AI-based medical guidance platform - TechCrunch


An AI system that thinks fast and slow – TechTalks

Posted: January 24, 2022 at 10:35 am

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

Despite tremendous advances in the recent decade, artificial intelligence is still sorely lacking in basic areas such as generalizability, adaptability, and causality. Today's AI systems, mostly centered around machine learning and deep learning, are limited to narrow applications, require large amounts of training data or experience, and are very sensitive to changes in their environments.

Researchers are looking to various fields of science to find solutions to the current limits of AI systems. A new concept, proposed by researchers from various organizations and universities, draws inspiration from the two-system thinking framework proposed by Nobel laureate psychologist and economist Daniel Kahneman. Introduced in a paper published online, the technique is called SlOw and Fast AI (SOFAI). SOFAI uses meta-cognition to arbitrate between different modes of inference to improve the efficiency of AI systems in using data and compute resources.

In his acclaimed book Thinking, Fast and Slow, Kahneman proposes that the human mind has two systems of decision making. System 1 is fast, implicit, intuitive, and imprecise. It controls the unconscious decisions we make, such as walking or driving in a familiar neighborhood, climbing stairs, tying our shoelaces, and other tasks we can do without conscious thinking and oftentimes in parallel. System 2, on the other hand, is the slow and meticulous type of decision-making that requires logic, rational thinking, and concentration, such as solving complex mathematical equations, playing chess, or walking on a narrow ledge.

The human brain does a great job of dividing decision-making between the two modes of thinking. For example, when you're learning a new task, such as driving, your System 2 will be more engaged. You'll need to concentrate to coordinate your different muscles, shifting gears, pressing and releasing pedals, and turning the steering wheel, while at the same time watching the street and listening to the engine. As you gradually repeat the routines, you learn to perform the tasks without concentration and your brain shifts the task to your System 1. This is why an experienced driver can control the car and do something else at the same time, such as talking to the passengers, while a novice driver must concentrate fully on doing all the tasks right.

In mentally demanding tasks, such as calculus or chess, System 2 will remain the ultimate controller. But System 1 will also shoulder some of the burden over time. For example, experienced chess players who have played thousands of games use System 1 to recognize patterns of moves and formations on the chessboard. It won't give the player a perfect solution, but it will provide intuition on where the game is headed and save the expensive System 2 crucial time when deciding the next move.

The division of labor between System 1 and System 2 is nature's solution to creating a balance between speed and accuracy, learning and execution.

As the researchers note in their paper, System 1 is able to build models of the world that, although inaccurate and imprecise, can fill knowledge gaps through causal inference, allowing us to respond reasonably well to the many stimuli of our everyday life. When the problem is too complex for System 1, System 2 kicks in and solves it with access to additional computational resources, full attention, and sophisticated logical reasoning.

Most AI systems use a single architecture to solve problems. For example, machine learning engineers will design a deep neural network to perform a single task and train it until it reaches the desired level of accuracy. Classic deep learning architectures have distinct limitations that have been amply documented in recent years. Among them are the need for large amounts of training data and computational resources. For example, a deep reinforcement learning system that mastered the videogame Dota 2 required thousands of years' worth of training.

On the other hand, current AI systems are very sensitive to edge cases, situations that they haven't encountered during training. For example, despite having been trained on millions of miles of simulated and real-world driving, autonomous vehicles sometimes make mistakes that most average drivers would easily avoid.

Inspired by Systems 1 and 2, the SOFAI architecture uses multiple problem solvers to address some of these limitations. SOFAI is composed of a pair of System 1 (S1) and System 2 (S2) models. The System 1 solver is very fast and automatically processes any new problem or input that SOFAI faces.

SOFAI has a meta-cognitive module (MC) that decides whether the System 1 solution is accurate and reliable enough or if it needs to activate the slower and more resource-intensive System 2 solver. Like the human mind, the system also has models of itself, others, and the world. As it accumulates experience, SOFAI updates these models, which helps it improve the confidence and reliability of fast decision-making with System 1.

The MC module arbitrates between the two systems by using the information it gains from the models and the solution provided by the S1 solver. Sometimes, the S1 solution might not be too accurate, but given time constraints, it might be a better option than spending additional resources on S2. In other cases, the expected gain from activating the S2 might not justify wasting extra resources, so the MC will opt to use the S1.

According to the researchers, "This architecture and flow of tasks allows for minimizing time to action when there is no need for S2 processing since S1 solvers act in constant time. It also allows the MC agent to exploit the proposed action and confidence of S1 when deciding whether to activate S2, which leads to more informed and hopefully better decisions by the MC."
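To make that arbitration flow concrete, here is a minimal sketch in Python of the kind of loop described above. The class names, the confidence threshold, and the cost accounting are all hypothetical and chosen only to illustrate the idea; the paper's actual implementation may differ.

# Minimal sketch of a SOFAI-style metacognitive arbitration loop.
# All names, thresholds, and the cost model are illustrative assumptions.
from dataclasses import dataclass
from typing import Any

@dataclass
class Proposal:
    action: Any
    confidence: float   # S1's self-assessed reliability for this input
    s2_cost: float      # estimated cost of invoking S2 for this input

class FastSolver:       # "System 1": cheap, always runs first
    def propose(self, state) -> Proposal:
        raise NotImplementedError

class SlowSolver:       # "System 2": expensive, invoked only on demand
    def solve(self, state) -> Any:
        raise NotImplementedError

class MetaCognition:
    """Decides whether S1's answer is good enough or S2 must be activated."""
    def __init__(self, confidence_threshold: float, budget: float):
        self.confidence_threshold = confidence_threshold
        self.budget = budget    # remaining time/compute budget

    def decide(self, state, s1: FastSolver, s2: SlowSolver) -> Any:
        proposal = s1.propose(state)        # S1 always produces a candidate
        # Trust S1 when it is confident enough, or when S2 would cost more
        # than the remaining budget allows.
        if proposal.confidence >= self.confidence_threshold or proposal.s2_cost > self.budget:
            return proposal.action
        self.budget -= proposal.s2_cost     # pay for deliberate reasoning
        return s2.solve(state)

The key design point this sketch tries to capture is that S1 always runs and the metacognitive module only spends the extra budget on S2 when the expected benefit justifies it.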

While the researchers present SOFAI as a concept, they also experimented with a real implementation of the system in a grid-navigation problem. The goal of the AI system was to generate a trajectory that went from the initial state to a goal state.

The environment had a reward for achieving the goal and penalties for each move. There are additional constraints such as extra penalties for squares with black, green, and blue color codes. Basically, the AI agent must find the shortest trajectory to the goal while avoiding states that result in penalties. The researchers added some randomness to the environment to prevent it from becoming deterministic.

The SOFAI agent was composed of a simple System 1 solver that chose the move with the highest expected reward. The S1 starts with no knowledge of the environment and gradually improves as it collects experience and the SOFAI updates its model of the world (the grid environment) and itself (trajectories and moves). The System 2 component is created based on the Multi-alternative Decision Field Theory (MDFT), an inference model that can reason between different choices. MDFT can provide better results than the untrained S1 solver, but it is slower and computationally more expensive. In their experiments, the researchers tested three variations of the MDFT, each tuned for different trajectory preferences.

At every step, SOFAI's meta-cognition unit decides whether it can trust the S1's solution or whether it needs to switch to the S2 solver.
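As a rough illustration of that setup (not the authors' code), the following sketch pairs a greedy, experience-based S1 with a placeholder S2 and a metacognitive switch keyed to how much experience S1 has in the current cell. The move set, the visit-count confidence proxy, the threshold, and the s2_deliberate stand-in are all invented for the example; the real MDFT-based S2 is considerably richer.

# Toy grid-navigation agent in the spirit of the experiment described above.
# S1 greedily picks the move with the highest learned reward estimate; the
# metacognitive check falls back to a slower solver when S1 lacks experience.
import random
from collections import defaultdict

MOVES = ["up", "down", "left", "right"]

class S1GridSolver:
    def __init__(self):
        self.value = defaultdict(float)   # (cell, move) -> running reward estimate
        self.visits = defaultdict(int)    # (cell, move) -> times this pair was tried

    def propose(self, cell):
        move = max(MOVES, key=lambda m: self.value[(cell, m)])
        confidence = min(1.0, self.visits[(cell, move)] / 20)  # crude experience proxy
        return move, confidence

    def update(self, cell, move, reward):
        self.visits[(cell, move)] += 1
        n = self.visits[(cell, move)]
        self.value[(cell, move)] += (reward - self.value[(cell, move)]) / n

def s2_deliberate(cell):
    # Stand-in for the slower, MDFT-based System 2 solver.
    return random.choice(MOVES)

def choose_move(cell, s1, threshold=0.7):
    move, confidence = s1.propose(cell)
    if confidence >= threshold:        # metacognition trusts the fast solver
        return move, "S1"
    return s2_deliberate(cell), "S2"   # otherwise pay for deliberation

Run over many trajectories, an agent like this would naturally shift from the S2 branch to the S1 branch as the visit counts grow, which is the qualitative behavior the researchers report below.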

The researchers tested the different variations of SOFAI against solo S1 and S2 (MDFT) agents. Their experiments show that when used alone, the S1 system generates poor reward, trajectory length, and timing results. The S2 solver generates good trajectories and rewards but is computationally expensive and has poor timing results. In contrast, SOFAI found the right balance between reward and efficiency.

They then aggregated the results over 1,000 trajectories to see how the SOFAI model evolves its behavior and balances the use of the S1 and S2 agents. The results show that as SOFAI goes through more and more trajectories, its timing decreases, which means it becomes more compute-efficient, and its evolving behavior is very similar to how the human mind distributes cognitive labor between System 1 and System 2.

In the beginning, the SOFAI mostly uses S2, because its S1 module does not have enough experience and its decisions are not trustworthy. As the S2 model goes through multiple trajectories, the SOFAI updates its environment and self models, which results in better decisions by the S1. Consequently, the MC module gradually starts to shift decisions to the faster S1 module instead of relying on the compute-intensive S2. After about 450 trajectories, S1 is used more often than S2. This evolving behavior allows SOFAI to be faster without degrading the quality of the trajectories it generates.

"This behavior is similar to what happens in humans: we first tackle a non-familiar problem with our System 2, until we have enough experience that it becomes familiar and we pass to using System 1," the researchers write.

SOFAI is one of several directions of research that have been inspired by the System 1 and 2 thinking theory. In 2019, deep learning pioneer Yoshua Bengio discussed System 2 deep learning, an area of research that aims to improve neural networks toward developing symbolic reasoning capabilities. Other related efforts are being made in developing hybrid AI systems that combine neural networks and symbolic AI.

And there are notable efforts in self-supervised learning systems that can develop behavior without the need for large amounts of data. The intersection of self-supervised learning and reinforcement learning is particularly interesting as it aims to develop memory and data-efficient AI systems that can be applied to the real world.

Though SOFAI is not the only game in town, it looks promising. The researchers plan to expand on the idea and create SOFAI systems that have multiple S1 and S2 modules and can tackle several problems with the same architecture.

Continue reading here:

An AI system that thinks fast and slow - TechTalks

