Artificial Intelligence and Cyber Security: modern concepts that ensure the safety of rail transportation – Logistics Middle East

The 2020 Railway Forum, organised by the Saudi Railway Company (SAR) under the patronage of the Custodian of the Two Holy Mosques King Salman bin Abdulaziz Al Saud, concluded its sessions today. Participants discussed railway security and safety, the future of digital railways and the concept of the integration of security and safety elements, and the role of artificial intelligence and "5G" technology in the sector.

Talal AlAnazi, director of corporate HSE and industrial security at Maaden, spoke about security and safety in transportation, explaining that Maaden has adopted the concept of improving industrial safety and security, given its importance in the work environment. He said: "We aspire to move from a culture of research and fix, in industrial safety and security, to a culture of expectation and prevention of risks before they occur." He referred to the application of Maaden's "safety steps" initiative, which aims to reduce risks by replacing a culture of "reaction" with a culture of "proactivity" throughout all of the company's business.

Abdullah Al-Yousef, engineering support services director at SAR, talked about the most important challenges facing the railway sector in the Kingdom. He explained that a lack of familiarity and safety culture around trains is among the most important of these challenges, drawing attention to the need to raise awareness about safety, both for employees and workers in the sector and for the general public.

Al-Yousef also highlighted the need to possess engineering and technical solutions while dealing intelligently with such technology and employing it in the correct framework. He pointed out that SAR has launched a series of safety awareness campaigns in all villages, cities and areas close to the railways, in addition to engaging on social networking platforms and sending text messages to citizens and residents. He said that many workshops and lectures have been held for SAR employees and workers to raise awareness of the importance of safety and security in the sector.

SAR's engineering support services director expressed the company's readiness to receive any technical ideas from forum participants to support the security and safety of the railways.

For her part, Kai Taylor, from the French group Thales, underlined the need to invest in youth's creative energies in transferring technology and making great use of it. She pointed out that bright days lie ahead if we anticipate the future, develop the right technologies, and use them in various sectors, especially transport and communications. She also stressed the need to extend trains to remote areas and to make use of 5G technology.

Matthias Schubert, executive vice president of Mobility at TÜV Rheinland Group, said that innovation is one of the important drivers of progress in the field of automation, pointing to the need to implement rigorous security and safety standards in transportation, since passenger safety cannot be sacrificed.

For his part, engineer Abdul Jabbar bin Salem, regional operations director for infrastructure in the Middle East and North Africa at SNC-Lavalin, talked about the importance of cyber security in the transport and railway sector, which, he said, has become more sophisticated and smarter.

During the "Future of Railways" session, a speaker from Huawei's transport solutions business development team stressed the importance of using fifth-generation (5G) technology in railways, as it contributes to reducing costs and energy consumption while increasing efficiency. He added that "digital transformation is an opportunity to provide a unified system of controls for safety and security".

Javier de la Cruz García Dihinx, managing director of CAF Rail Digital Services, explained that digital transformation is an urgent and important issue in the transport and railway sector, through which systems can be integrated with each other.

Andres de Leon, CEO of HyperloopTT, spoke about hyperloop trains, explaining that they are fast, human-focused, sustainable and profitable, and noting that they could interlink Europe within a few hours.


Automation and AI sound similar, but may have vastly different impacts on the future of work – Brookings Institution

Last November, Brookings published a report on artificial intelligence's impact on the workplace that immediately raised eyebrows. Many readers, journalists, and even experts were perplexed by the report's primary finding: that, for the most part, it is better-paid, better-educated white-collar workers who are most exposed to AI's potential economic disruption.

This conclusion, by authors Mark Muro, Robert Maxim, and Jacob Whiton, seemed to fly in the face of the popular understanding of technology's future effects on workers. For years, we've been hearing about how these advancements will force mainly blue-collar, lower-income workers out of jobs, as robotics and technology slowly consume those industries.

In an article about the November report, The Mercury News outlined this discrepancy: "The study released Wednesday by the Brookings Institution seems to contradict findings from previous studies, including Brookings' own, that showed lower-skilled workers will be most affected by robots and automation, which can involve AI."

One of the previous studies that article refers to is likely Brookings' January 2019 report (also written by Muro, Maxim, and Whiton) titled "Automation and Artificial Intelligence: How machines are affecting people and places." And indeed, in apparent contradiction of the AI report, the earlier study states: "The impacts of automation in the coming decades will be variable across occupations, and will be visible especially among lower-wage, lower-education roles in occupations characterized by rote work."

So how do we square these two seemingly disparate conclusions? The key is in distinguishing artificial intelligence and automation, two similar-sounding concepts that nonetheless will have very different impacts on the future of work here in the U.S. and across the globe. Highlighting these distinctions is critical to understanding what types of workers are most vulnerable, and what we can do to help them.

The difference in how we define automation versus AI is important in how we judge their potential effects on the workplace.

Automation is a broad category describing an entire class of technologies rather than just one, hence much of the confusion surrounding its relationship to AI. Artificial intelligence can be a form of automation, as can robotics and software, the three fields that the automation report focused on. Examples of the latter two forms could be machines that scurry across factory floors delivering parts and packages, or programs that automate administrative duties like accounting or payroll.

Automation substitutes for human labor in tasks both physical and cognitive, especially those that are predictable and routine. Think machine operators, food preparers, clerks, or delivery drivers. Activities that seem relatively secure, by contrast, include "the management and development of people; applying expertise to decisionmaking, planning and creative tasks; interfacing with people; and the performance of physical activities and operating machinery in unpredictable physical environments," the automation report specified.

In the more recent AI-specific report, the authors focused on the subset of AI known as machine learning: using algorithms to find patterns in large quantities of data. Here, the technology's relevance to the workplace is less about tasks and more about intelligence. Instead of the routine, AI theoretically substitutes for more interpersonal duties such as human planning, problem-solving, or perception.
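To make "finding patterns in large quantities of data" concrete, here is a toy, from-scratch sketch of one classic machine learning technique, k-means clustering in one dimension. The data and function names are invented for illustration; real systems use far richer data and mature libraries, but the idea is the same: the algorithm discovers structure no one hand-coded.

```python
def kmeans_1d(points, k=2, iterations=20):
    """Group 1-D numbers into k clusters by iteratively refining centroids."""
    # Start the centroids at the smallest and largest observed values.
    centroids = [min(points), max(points)]
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two obvious groups of numbers; the algorithm finds them unaided.
data = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
print(sorted(round(c, 1) for c in kmeans_1d(data)))  # [1.0, 9.0]
```

The workplace-relevant point is that nothing in the code mentions the groups themselves; the "pattern" emerges from the data, which is what distinguishes machine learning from hand-written automation rules.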

And what are some of the topline occupations exposed to AI's effects, according to Brookings' research? Market research analysts and marketing specialists (planning and creative tasks, interfacing with people), sales managers (the management and development of people), and personal financial advisors (applying expertise to decisionmaking). The parallels between what automation likely won't affect and what AI likely will affect line up almost perfectly.

Machine learning is especially useful for prediction-based roles. "Prediction under conditions of uncertainty is a widespread and challenging aspect of many information-sector jobs in health, business, management, marketing, and education," wrote Muro, Maxim, and Whiton in a recent follow-up to their AI report. These predictive, mostly white-collar occupations seem especially poised for disruption by AI.

Some news outlets grasped this difference between the AI and the automation reports. In The New York Times's Bits newsletter, Jamie Condliffe wrote: "Previously, similar studies lumped together robotics and A.I. But when they are picked apart, it makes sense that A.I., which is about planning, perceiving and so on, would hit white-collar roles."

A very clear way to distinguish the impacts of the two concepts is to observe where Brookings Metro research anticipates those impacts will be greatest. The metro areas where automation's potential is highest include blue-collar or service-sector-centric places such as Toledo, Ohio; Greensboro, N.C.; Lakeland-Winter Haven, Fla.; and Las Vegas.

The top AI-exposed metro area, by contrast, is the tech hub of San Jose, Calif., followed by other large cities such as Seattle and Salt Lake City. Places less exposed to AI, the report says, range from bigger, service-oriented metro areas such as El Paso, Texas; Las Vegas; and Daytona Beach, Fla., to smaller leisure communities including Hilton Head and Myrtle Beach, S.C., and Ocean City, N.J.

AI will also likely have different impacts on different demographics than other forms of automation. In their report on the broader automation field, Muro, Maxim, and Whiton found that 47% of Latino or Hispanic workers are in jobs that could, in part or wholly, be automated. American Indians had the next highest automation potential, at 45%, followed by Black workers (44%), white workers (40%), and Asian Americans (39%). Reverse that order, and you'll come very close to the authors' conclusion on AI's impact on worker demographics: Asian Americans have the highest potential exposure to AI disruption, followed by white, Latino or Hispanic, and Black workers.

For all of these differences, one important similarity does exist between AI's and broader automation's impact on the workforce: uncertainty. Artificial intelligence's real-world potential is clouded in ambiguity, and indeed, the AI report used the text of AI-based patents to attempt to foresee its usage in the workplace. The authors hypothesize that, far from taking over human work, AI may end up complementing labor in fields like medicine or law, possibly even creating new work and jobs as demand increases.

As new forms of automation emerge, automation too could end up having any number of potential long-term impacts, including, paradoxically, increasing demand and creating jobs. "Machine substitution for labor improves productivity and quality and reduces the cost of goods and services," the authors write. "This may, though not always, and not forever, have the impact of increasing employment in these same sectors."

As policymakers draw up potential solutions to protect workers from technological disruption, it's important to keep in mind the differences between advancements like AI and automation at large, and who, exactly, they're poised to affect.


Why are we so scared of Artificial Intelligence development? – Open Access Government

The proliferation of technology throughout society has been rapid. As with any kind of societal shift, it has led to excitement for the future and concern about new risks, particularly when it comes to job security. Indeed, academic institutions such as the University of Oxford suggesting that 47% of the current job market will be made obsolete by rapid advancements in tech over the next 25 years have done nothing to calm public nerves.

Such staggering numbers are inevitably eye-catching, but not always helpful. Indeed, they tend to result in baseless panic. As the German philosopher Immanuel Kant suggested, it is within human nature to project things to infinity, regardless of the evidence.

Thankfully, a growing body of evidence indicates the future relationship between tech and humanity will be prosperous and peaceful. That's partly because humans have some innate qualities that will be difficult for machines ever to replicate, meaning we will always be needed for the proper operation of society.

This view is supported by the experts; the World Economic Forum has predicted that whilst 75 million jobs could be lost to tech between 2020 and 2025, a further 133 million new roles will be created. These will be in the maintenance, management and oversight of technology, in roles many of which are yet to be invented.

Pedro Domingos, a professor at the University of Washington and an expert on the future relations between man and machine, suggests that AI will lead to the creation of roles that we cannot yet understand. After all, if you asked a person in the 1980s to describe the role of an app developer, they'd be uncertain. Indeed, this is the case for those of us today trying to imagine the world of work in 2030 or 2050: there are too many variables for us to make an effective prediction. One thing is sure: there'll be new jobs for people to fill.

However, the new jobs created won't just be about creating and maintaining the new technology. That will make up part of the job creation, of course, but newly carved human roles in the future workforce will also involve working with machines.

This refers to how AI will be able to augment and complement, rather than replace, what humans are already able to do. A fantastic example comes from Prospex. This tech, developed by Fountech Ventures, is an AI program that can generate leads for salespeople. So, rather than spending hours manually trawling through databases to find business leads, the technology rapidly compiles lists of people to contact. As such, salespeople can focus on their passion, amplify their skills and delegate all of the drudgery to a machine.

Another example arrives from Autodesk, which has developed an AI program called Dreamcatcher. It too has a simple proposition: generating new designs based on assigned parameters. So, if you were designing a new table, you'd input the parameters for the legs, tabletop and so on, and it then generates a multitude of options based on the designer's preferences. The designer is then able to choose or amend a design as needed. So, we see human creativity flourish thanks to collaboration with AI technology.

The future collaboration between man and machine will also be characterised by a more obvious aspect: the automation of repetitive tasks. For the millions of people worldwide who still work in factories, this might be a worrying prospect. However, the reality is more complex and, indeed, positive.

In fact, it will likely entail more fulfilling roles, and the example of chatbots illustrates this point clearly. They are able to access vast amounts of data and are usually positioned at the beginning of the customer contact experience. Here, they answer questions and direct customers to call centres, where a specialised, human response can be given if needed. For the customer, it means a more efficient service. And for workers, it means dealing with more interesting issues rather than asking "what is your customer reference?" all day, certainly a win-win.

John E. Kelly III, executive vice president at IBM, asserted that, collaboratively, humans and machines working together "always beat or make a better decision than a man or a machine independently", and I agree. As long as experts treat this new technology with care, there will be no limit to what man and machine can collectively achieve.



Preventing hospital readmissions with the help of artificial intelligence – University of Virginia The Cavalier Daily

The University Health System's data science team recently advanced to the next stage of a nationwide competition to apply artificial intelligence to hospital readmissions, a persistent and costly issue. Sponsored by the Centers for Medicare and Medicaid Services (CMS), the inaugural Artificial Intelligence Health Outcomes Challenge initially received hundreds of applications. CMS chose only 25 submissions, the University's among them, to execute their proposed strategies.

A few years ago, in order to significantly reduce unplanned readmissions to the hospital, the University initiated efforts to develop a cutting-edge yet easily accessible solution to this widespread problem. Bommae Kim, senior data scientist for the University Health System, began pursuing remedies for the epidemic of readmissions in 2018.

"Usually, after a patient was discharged, they couldn't manage their disease for some reason, so we're trying to figure out what that reason is and help," Kim said.

The University Health System's data science team found that three percent of patients at the University account for 30 percent of readmissions within the first 30 days following release from the hospital, while the majority of the remaining 70 percent return within a year. After identifying the need to decrease such adverse events, data scientists in the University's Health System, such as Jason Adams, turned to artificial intelligence to target key factors that contribute to a patient returning unnecessarily to the hospital.

"The purpose is to take this amount of information and, in an automated way, to tell that a person is at risk and what course of action can best help that patient," Adams said.

Kim acts as project leader alongside a team of data scientists and information technology personnel. Overseen by Jon Michel, director of data science for the University Health System, the researchers produce models that help predict the likelihood of readmission and subsequently provide actionable advice for physicians.

Only a year or so later, in 2019, CMS announced a competition to tackle the same challenge. CMS directed participants to employ the computing power of artificial intelligence to construct a model that accurately and efficiently flags patients at risk of returning to the hospital for non-routine treatments. More than 300 applicants submitted proposals during the launch stage of the challenge.

The University was one of only 25 groups selected to advance to the next stage, vying with organizations such as IBM, Deloitte and the Mayo Clinic for the $1 million grand prize and for use of the winning approach by the CMS Innovation Center to determine payment and service delivery strategies.

"We're doing this for our U.Va. patients, but it would be nice to win the competition because then we can deploy our approach at the national level," Kim said. "We believe in our approach."

For this phase of the competition, CMS distributed Medicare claims data to the remaining teams. Claims from across the country provide the opportunity to fine-tune the University's model with data from outside the University Health System. According to application systems analyst programmer Angela Saunders, the supplemental details will prove beneficial for the University's models.

Saunders did point out challenges with the millions of rows of data, which require extensive resources simply to store in an environment suitable for manipulation. Furthermore, inconsistencies lingered in the dataset from year to year, requiring the feature engineering team to sift and sort through the tables, standardizing entries and column headers, which detail the traits associated with each claimant.

"It's not just a little data," Saunders said. "We have exhausted a lot of resources just to get the data to consistency. Each year, things change just a little bit, and so just getting it into a consistent format is a lot of the battle."
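The header-standardization chore Saunders describes can be sketched in miniature. The column names and canonical schema below are invented for illustration (the team's actual schema is proprietary); the point is simply that each year's differently named columns get mapped onto one consistent set before records are merged.

```python
# Hypothetical canonical schema: each canonical name lists the header
# variants that different years' claims files might use for it.
CANONICAL = {
    "bene_id":    {"bene_id", "beneficiary_id", "BENE_ID"},
    "admit_date": {"admit_date", "admsn_dt", "ADMSN_DT"},
    "diagnosis":  {"diagnosis", "prncpal_dgns_cd", "DGNS_CD"},
}

def standardize_headers(row):
    """Rename one record's keys to the canonical schema, dropping unknowns."""
    out = {}
    for canonical, variants in CANONICAL.items():
        for key, value in row.items():
            if key in variants:
                out[canonical] = value
    return out

# A 2017-style record and a 2018-style record end up identical in shape.
r2017 = {"BENE_ID": "A1", "ADMSN_DT": "2017-03-02", "DGNS_CD": "I50.9"}
r2018 = {"beneficiary_id": "A2", "admit_date": "2018-05-14", "diagnosis": "I50.9"}
print(sorted(standardize_headers(r2017)) == sorted(standardize_headers(r2018)))  # True
```

Scaled to millions of rows, this kind of mapping is the "lot of the battle" Saunders refers to: once every year shares one schema, the records can finally be stacked and modeled together.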

Based on the team's assessments, much of the feature engineering portion of the project (at least the preliminary round of it) has been completed. The next step involves transporting data to Rivanna, the University's high performance computing system, and fitting predictive models to the data. Data scientist Rupesh Silwal, who helps design and evaluate multiple iterations of the modeling architecture, noted the importance not only of systemizing the entries, but also of ensuring sensitive medical data remains anonymous.

"The feature engineering team has cleaned the data, made sure everything makes sense from year to year and that all of the sensitive information is scrubbed so we can move the data to this other computing infrastructure," Silwal said. "Part of our effort has been focused on getting the data in there and using it to set up a modeling environment to see if we can make predictions."

Specifics regarding the modeling techniques and factors employed in creating the University's unique solution could not be revealed at this time, due to the proprietary nature of the ongoing competition. In broad terms, factors such as past utilization of certain hospital services, like the Emergency Department, or chronic conditions contribute to the initial formulation of the model, as they are indicators of high potential for readmission, data scientist Adis Ljubovic said.

"Those are fairly well-known and we're using that as the baseline, but we also have the secret sauce ones that are preventable," Ljubovic said.

Other variables intended to capture financial, transportation and lifestyle information for patients also augment the standard determinants of readmission, while electronic medical records from the University provide documentation of trends in the University's own health system.
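The kind of "baseline" Ljubovic describes, built from well-known indicators such as prior emergency visits and chronic conditions, can be illustrated with a minimal logistic risk score. Everything here is an assumption for the sketch: the feature names and weights are invented and bear no relation to the team's proprietary model; real models would be fitted to claims data, not hand-weighted.

```python
import math

# Invented weights over two well-known readmission indicators; a real
# model would learn these coefficients from historical claims data.
WEIGHTS = {"ed_visits_last_year": 0.8, "chronic_conditions": 0.5}
INTERCEPT = -3.0

def readmission_risk(patient):
    """Map a patient's indicator counts to a 0-1 readmission probability."""
    score = INTERCEPT + sum(WEIGHTS[f] * patient[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-score))  # logistic link: score -> probability

low  = {"ed_visits_last_year": 0, "chronic_conditions": 1}
high = {"ed_visits_last_year": 4, "chronic_conditions": 5}
print(readmission_risk(low) < readmission_risk(high))  # True
```

Flagging works by thresholding the probability: patients whose score exceeds a chosen cutoff get surfaced to clinicians, which is why the article stresses pairing such scores with actionable, clinician-trusted advice rather than a bare number.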

Another distinctive aspect of the University's proposal is its commitment to a solution that clinicians accept. Senior data scientist John Ainsworth and Ljubovic, along with other members of the University's project, assert that the healthcare industry generally adopts a conservative mindset with regard to artificial intelligence modeling in hospitals. However, the University Health System's data scientists have consulted with doctors at the University hospital about introducing tools that physicians trust and can easily adopt.

"Data science techniques bring with them the potential for accuracy, for bringing in and ingesting larger datasets," Ainsworth said. "The richness of the data gets recorded, and putting the information in front of clinicians that can help them take meaningful action is what we're going for. If we can ... give them some sense of where preventative strategies might lie, that can support them in their goal of caring for patients."

Several members of the team agreed that a complex issue like hospital readmission calls for a collective approach. In the University Health System's data science department, that can be a rare occurrence, several data scientists remarked, as their separate assignments often occupy most of their time. Senior business intelligence developer Manikesh Iruku expressed appreciation for the chance to learn more about data science techniques, and others shared similar experiences when it came to exploring different subfields of data science.

Saunders and data scientist Valentina Baljak emphasized the confidence this collaboration has given the group to tackle new tasks.

"Frequently for us, we have our own projects and it's a one-person project," Baljak said. "Occasionally you collaborate with someone, but I don't really think we had a project that involved all of us at the same time. That has been a great experience."

Currently, competitors are finalizing their project packages to submit to CMS in February. CMS plans to winnow the field down to the seven best proposals by April. Regardless of the outcome, the University's team plans to put their results and newly developed models into practice within the University's Health System.

"In particular for healthcare, in some ways the best is yet to come in the data science world," Ainsworth said. "The future is bright for data science in healthcare."


You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place – Chemistry World

Janelle Shane | Wildfire | 2019 | 272pp | £20 | ISBN 9781472268990


How many giraffes can you see? "Too many to count." This answer seems unlikely, unless it comes from an artificial intelligence algorithm trained on a dataset in which giraffes were somewhat overrepresented.

As Janelle Shane explains in her book on how artificial intelligence works and why it's making the world a weirder place, accidental over-inclusion of giraffes in the image collections used to train AIs is exceedingly likely. The presence of a giraffe is so rare and exciting that you are almost certain to take a photo.

An optical physicist, Shane documents such eccentricities of AI on her blog and Twitter feed, which she has now expanded into this delightful book. The title comes from an experiment she ran to see if an AI could generate human-acceptable pick-up lines. The results, as the title itself attests, were mixed.

This is the crux of Shane's highly compelling argument about AI: its danger doesn't come from AI exceeding human intelligence (human-like AI remains firmly within the realm of science fiction) but from the very weird things that narrowly focused algorithms do. Like the self-driving car algorithm that identified a sideways-on lorry as a road sign, causing a fatal accident. As artificial intelligence becomes ever more deeply embedded in our modern digital lives, it behoves us all to understand it better and know its limitations and failings.

I really loved this book, and, if you like your serious science accompanied by very cute cartoon illustrations, you will too. Shane's explanations are not only laugh-out-loud hilarious but also highly accessible. Reading the book moved me to do my own experiments in AI weirdness, like playing with predictive text on my smartphone and chatting with virtual chatbots about giraffes. I feel much better informed as a result.

This book features in our book club podcast, which you can listen to here.


Cambridge Science Festival examines the effects and ethics of artificial intelligence – Cambridge Network

Artificial intelligence features heavily in a series of events covering new technologies at the 2020 Cambridge Science Festival (9-22 March), which is run by the University of Cambridge.

Hype around artificial intelligence, big data and machine learning has reached fever pitch. Drones, driverless cars, films portraying robots that look and think like humans: today, intelligent machines are present in almost all walks of life.

During the first week of the Festival, several events look at how these new technologies are changing us and our world. In "AI and society: the thinking machines" (9 March), Dr Mateja Jamnik of the Department of Computer Science and Technology considers our future and asks: What exactly is AI? What are the main drivers of the amazing recent progress in AI? How is AI going to affect our lives? And could AI machines become smarter than us? She answers these questions from a scientific perspective and talks about building AI systems that capture some of our informal and intuitive human thinking. Dr Jamnik demonstrates a few applications of this work, presents some opportunities it opens up, and considers the ethical implications of building intelligent technology.

Artificial intelligence has also created a lot of buzz about the future of work. In "From policing to fashion: how the use of artificial intelligence is shaping our work" (10 March), Alentina Vardanyan of Cambridge Judge Business School and Lauren Waardenburg of the KIN Center for Digital Innovation, Amsterdam, discuss the social and psychological implications of AI, from reshaping the fashion design process to predictive policing.

Speaking ahead of the event, Lauren Waardenburg said: "Predictive policing is quite a new phenomenon and gives one of the first examples of real-world data translators, which is quite a new and upcoming type of work that many organisations are interested in. However, there are unintended consequences for work and the use of AI if an organisation doesn't consider the large influence such data translators can have."

"Similarly, AI in fashion is also a new phenomenon. The feedback of an AI system changes the way designers and stylists create and how they interpret their creative role in that process. The suggestions from the AI system put constraints on what designers can create. For example, the recommendations may be very specific in suggesting the colour palette, textile, and style of the garment. This level of nuanced guidance not only limits what they can create, but it also puts pressure on their self-identification as a creative person."

The technology we encounter and use daily changes at a pace that is hard for us to truly take stock of, with every new device release, software update and new social media platform creating ripple effects. In "How is tech changing how we work, think and feel?" (14 March), a panel of technologists looks at current and near-present mainstream technology to better understand how we think and feel about data and communication. The panel comprises Dr David Stillwell, lecturer in big data analytics and quantitative social science at Cambridge Judge Business School; Tyler Shores, PhD researcher at the Faculty of Education; Anu Hautalampi, head of social media for the University of Cambridge; and Dex Torricke-Barton, director at the Brunswick Group and former speechwriter for Mark Zuckerberg, Elon Musk, Eric Schmidt and the United Nations. They discuss some of the data and trends that illustrate the impact tech has on our personal, social and emotional lives, as well as ways forward and what the near future holds.

Tyler Shores commented: "One thing is clear: the challenges that we face as a result of technology do not necessarily have solutions via other forms of technology, and there can be tremendous value for all of us in reframing how we think about how and why we use digital technology in the ways that we do."

The second week of the Festival considers the ethical concerns of AI. In "Can we regulate the internet?" (16 March), Dr Jennifer Cobbe of the Trust & Technology Initiative, Professor John Naughton of the Centre for Research in the Arts, Social Sciences and Humanities, and Dr David Erdos of the Faculty of Law ask: How can we combat disinformation online? Should internet platforms be responsible for what happens on their services? Are platforms beyond the reach of the law? Is it too late to regulate the internet? They review current research on internet regulation, as well as ongoing government proposals and EU policy discussions for regulating internet platforms. One argument put forward is that regulating internet platforms is both possible and necessary.

When you think of artificial intelligence, do you get excited about its potential and all the new possibilities? Or do you have concerns about AI and how it will change the world as we know it? In "Artificial intelligence, the human brain and neuroethics" (18 March), Tom Feilden of BBC Radio 4 and Professor Barbara Sahakian of the Department of Psychiatry discuss the ethical concerns.

In Imaging and vision in the age of artificial intelligence (19 March), Dr Anders Hansen, Department of Applied Mathematics and Theoretical Physics, also examines the ethical concerns surrounding AI. He discusses new developments in AI and demonstrates how systems designed to replace human vision and decision processes can behave in very non-human ways.

Dr Hansen said: "AI and humans behave very differently given visual inputs. A human doctor presented with two medical images that, to the human eye, are identical will provide the same diagnosis for both cases. The AI, however, may on the same images give 99.9% confidence that the patient is ill based on one image, whereas on the other image (that looks identical) give 99.9% confidence that the patient is well."

Such examples demonstrate that the reasoning the AI performs is completely different from a human's. The paradox is that when tested on big data sets, the AI is as good as a human doctor at predicting the correct diagnosis.

Given the non-human behaviour that cannot be explained, is it safe to use AI in automated diagnosis in medicine, and should it be implemented in the healthcare sector? If so, should patients be informed about the non-human behaviour and be able to choose between AI and doctors?
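The confidence flip Dr Hansen describes can be illustrated with a minimal, self-contained sketch. This is not his system: it is a hypothetical linear classifier with deliberately large weights, where a perturbation far too small to see shifts the model's "ill" confidence from above 99% to below 1%. All names and numbers here are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "diagnosis" model: confidence = sigmoid(w . x).
# Large-magnitude weights make it extremely sensitive to tiny
# input changes, which is the essence of adversarial examples.
rng = np.random.default_rng(0)
w = rng.normal(size=1000) * 50.0            # large-magnitude weights
x = rng.normal(size=1000)                   # stand-in for an image

# Shift x so the model is >99% confident the "patient is ill":
x = x + w * (7.0 - w @ x) / (w @ w)         # forces w @ x = 7.0

# A perturbation aligned against w, numerically tiny per component:
delta = -14.0 * w / (w @ w)                 # flips w @ x to -7.0
x_adv = x + delta                           # visually "identical" input

p_ill = sigmoid(w @ x)                      # > 0.99 ("ill")
p_ill_adv = sigmoid(w @ x_adv)              # < 0.01 ("well")
print(p_ill, p_ill_adv, np.max(np.abs(delta)))
```

The perturbation's largest single component is on the order of 0.001, yet the prediction reverses entirely; a human looking at the two inputs could not tell them apart.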

A related event explores the possibilities of creating AI that acts in more human ways. In Developing artificial minds: joint attention and robotics (21 March), Dr Mike Wilby, lecturer in Philosophy at Anglia Ruskin University, describes how we might develop our distinctive suite of social skills in artificial systems to create benign AI.

"One of the biggest challenges we face is to ensure that AI is integrated into our lives, such that, in addition to being intelligent and partially autonomous, AI is also transparent, trustworthy, responsive and beneficial," Dr Wilby said.

He believes that the best way to achieve this would be to integrate it into human worlds in a way that mirrors the structure of human development. Humans possess a distinctive suite of social skills that partly explains the uniquely complex and cumulative nature of the societies and cultures we live within. These skills include the capacity for collaborative plans, joint attention, joint action, as well as the learning of norms of behaviour.

Based on recent ideas and developments within Philosophy, AI and Developmental Psychology, Dr Wilby examines how these skills develop in human infants and children and suggests that this gives us an insight into how we might be able to develop benign AI that would be intelligent, collaborative, integrated and benevolent.

Further related events include:

Bookings open on Monday 10 February at 11am.

Image: Artificial_intelligence_by_gerd_altmann

Read the original post:
Cambridge Science Festival examines the effects and ethics of artificial intelligence - Cambridge Network

Artificial Intelligence Could Revolutionize the Study of Jewish Law. Is That a Good Thing? – Mosaic

As early as the 1960s, scholars and technicians began the task of digitizing halakhic literature, making it possible to search quickly through an ever-growing group of texts. Technological advances since then have improved the quality of searches, sped up the pace of digitization, and made such tools accessible to anyone with a smartphone. Now, write Moshe Koppel and Avi Shmidman, machine learning and artificial intelligence can do much more: they can make texts penetrable to the lay reader by adding vowel-markings and punctuation while spelling out abbreviations, create critical editions by comparing early editions and manuscripts, and even compose lists of sources on a single topic.

After explaining the vast potential created by these new technologies, Koppel and Shmidman discuss both their benefits and their costs, beginning with the fact that a layperson will soon be able to navigate a textual tradition with an ease previously reserved for the sophisticated scholar:

On the one hand, this [change] is a blessing: it broadens the circle of those participating in one of the defining activities of Judaism, [namely Torah study], including those on the geographic or social periphery of Jewish life. [On the other hand], the traditional process of transmission of Torah from teacher to student and from generation to generation is such that much more than raw text or hard information is transmitted. Subtleties of emphasis and attitude (which topics are central, what is a legitimate question, who is an authority, what is the appropriate degree of deference to such authorities, which values should be emphasized and which honored only in the breach, when must exceptions be made, and much more) are transmitted as well.

All this could be lost, or at least greatly undervalued, as the transmission process is partially short-circuited by technology; indeed, signs of this phenomenon are already evident with the availability of many Jewish texts on the Internet.

And moving further into the future, what if computer scientists could create a sort of robot rabbi, using the same sort of artificial intelligence that has been used to defeat the greatest chess masters or Jeopardy champions?

[S]uch a tool could very well turn out to be corrosive, and for a number of reasons. First, programs must define raw inputs upfront, and these inputs must be limited to those that are somehow measurable. The difficult-to-measure human elements that a competent [halakhic authority] would take into account would likely be ignored by such programs. Second, the study of halakhah might be reduced from an engaging and immersive experience to a mechanical process with little grip on the soul.

Third, just as habitual use of navigation tools like Waze diminishes our navigating skills, habitual use of digital tools for [answering questions of Jewish law] is likely to dry up our halakhic intuitions. In fact, framing halakhah as nothing but a programmable function that maps situations to outputs like do/don't is likely to reduce it in our minds from an exalted heritage to one arbitrary function among many theoretically possible ones.
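The "programmable function" framing the authors warn against can be made concrete with a deliberately crude sketch. The rule table below is entirely invented (no real halakhic content); its point is structural: inputs must be defined upfront and measurable, so anything hard to quantify, such as the questioner's circumstances, has no slot in the function's signature at all.

```python
# Hypothetical, invented rule table -- illustrative only, not
# real rulings. Note what the signature can and cannot express.
RULES = {
    ("situation_a", "weekday"): "do",
    ("situation_a", "shabbat"): "don't",
    ("situation_b", "weekday"): "do",
}

def rule_lookup(situation: str, day: str) -> str:
    # Only the two measurable inputs reach this function; the
    # "difficult-to-measure human elements" the authors mention
    # were excluded the moment the signature was fixed.
    return RULES.get((situation, day), "consult a human authority")

print(rule_lookup("situation_a", "shabbat"))   # don't
print(rule_lookup("situation_c", "weekday"))   # consult a human authority
```

Even the fallback branch illustrates the authors' first objection: the program can only recognize that a case is outside its table, not weigh the unmeasured factors that make it so.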

Read more at Lehrhaus

More about: Artificial Intelligence, Halakhah, Judaism, Technology

Read more:
Artificial Intelligence Could Revolutionize the Study of Jewish Law. Is That a Good Thing? - Mosaic

Latinos, Alzheimer’s and Artificial Intelligence – AL DIA News

Alzheimer's is one of the fastest-growing causes of death in the United States. More than 5.8 million Americans currently have the disease. By 2050, nearly 14 million people in the United States over the age of 65 could be living with the disease unless scientists develop new approaches to prevent or cure it.

The limited inclusion of Latinos and African Americans in research will only worsen the outlook, although successful efforts across the country could help us keep up with the disease.

The face of Alzheimer's disease is changing, mainly because the number one risk factor is old age. By 2030, the number of Latinos over 65 will have grown by 224 percent compared to 65 percent among non-Hispanic whites.

Senator Amy Klobuchar, in her 2019 election program, stated that by 2030 Latinos and African Americans will constitute nearly 40% of the 8.4 million Americans living with Alzheimer's.

Much of this research has been conducted by the organization UsAgainstAlzheimer's, which claims that less than 4% of studies in the United States focus on communities of color. Overall, only 5% of the reviews included a strategy for recruiting underrepresented populations such as Latinos or African Americans. The studies surprisingly overlook the fact that African Americans are two to three times more likely to develop Alzheimer's than non-Hispanic whites, while Latinos are 1.5 times more likely.

Similarly, the growing impact of the disease increases costs in Latino families. For example, the total cost of Alzheimer's disease in the Latino community will reach $2.3 trillion by 2060 if the disease's trajectory continues on its current course.

Artificial Intelligence: A possibility

A team of researchers led by UC Davis Health professor Brittany Dugger received a $3.8 million grant from the National Institute on Aging (NIA) to help define the neuropathology of Alzheimer's disease in Hispanic cohorts. The grant will fund the first large-scale initiative to present a detailed description of the brain manifestations of Alzheimer's disease in people of Mexican, Cuban, Puerto Rican, and Dominican descent.

"There is little information on the pathology of dementia affecting people of minority groups, especially for people of Mexican, Cuban, Puerto Rican, and Dominican descent," Brittany Dugger said in a news release.

The research will include the study of post-mortem brain tissue donated by more than 100 people from the diverse groups mentioned above.

In partnership with Michael Keizer of UC San Francisco, the researchers will use artificial intelligence and machine learning to locate different pathologies in the brain and thus define the neuropathological landscape of Alzheimer's disease.

The study's findings will help develop specific disease profiles for individuals. This profile will establish a basis for precise medical research to obtain the correct treatment for the right patient at the right time. This approach to medicine reduces disease disparities and advances medicine for all communities.

Read more here:
Latinos, Alzheimer's and Artificial Intelligence - AL DIA News

Manufacturing Companies Struggling with Artificial Intelligence Implementation – Water Technology Online

While manufacturing companies see the value in implementing artificial intelligence (AI) solutions, many are struggling to deliver clear results and are reevaluating their strategy, according to a new report. The report was commissioned by Plutoshift, a provider of automated performance monitoring for industrial workflows.

The findings revealed that almost two-thirds (61%) of manufacturing companies said they need to reevaluate the way they implement AI projects. The report, titled "Breaking Ground on Implementing AI", uncovered that while companies are making progress with their AI initiatives, many planning and implementation struggles remain, from defining realistic outcomes to data collection and maturity to managing budget scope and more.

To gauge the progress and process of how manufacturing companies are implementing AI, and whether or not they are satisfied with their AI initiatives, Plutoshift surveyed 250 manufacturing professionals with visibility into their company's AI programs in October 2019.

A major reason companies are rethinking their AI implementation plans is a lack of the data infrastructure needed to fully use AI: 84% of respondents say their company cannot automatically and continuously act on their data intelligence.

The report uncovered further foundational challenges with successful AI implementation, including that 72% of manufacturing companies said it took more time than anticipated for their company to implement the technical/data collection infrastructure needed to take advantage of the benefits of AI.

"Companies are forging ahead with the adoption of AI at an enterprise level," said Prateek Joshi, CEO and founder of Plutoshift. "But despite the progress that some companies are making with their AI implementations, the reality that's often underreported is that AI initiatives are loosely defined. Companies in the middle of this transformation usually lack the proper technology and data infrastructure. In the end, these implementations can fail to meet expectations. The insights in this report show us that companies would strongly benefit by taking a more measured and grounded approach towards implementing AI."

Other key findings include:

See the article here:
Manufacturing Companies Struggling with Artificial Intelligence Implementation - Water Technology Online

Artificial Intelligence – What it is and why it matters | SAS

The term "artificial intelligence" was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn't that scary or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.

Why is artificial intelligence important?

Read more here:
Artificial Intelligence – What it is and why it matters | SAS