Raytheon Developing Machine Learning that will Communicate what it Learned – Financialbuzz.com

Raytheon Company (NYSE: RTN) announced that it is developing a machine learning technology under a $6 million contract from the Defense Advanced Research Projects Agency (DARPA) for the Competency Aware Machine Learning (CAML) program. According to the defense contractor and technology company, systems will be able to communicate the abilities they have learned, the conditions under which those abilities were learned, the strategies they recommend and the situations in which those strategies can be used.

Ilana Heintz, principal investigator for CAML at Raytheon BBN Technologies, explained: "The CAML system turns tools into partners. It will understand the conditions where it makes decisions and communicate the reasons for those decisions."

The machine learning technology will learn from a video-game-like process: instead of giving the system a specific set of rules, the developers will tell the system what choices it has in the game and what the end goal is. By repeatedly playing the game, the system will learn the most effective ways to meet the goal. When successful, the system will then explain itself by recording the conditions and strategies it used to arrive at successful outcomes.
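
The article does not describe CAML's internals; purely as an illustration of the learn-by-playing loop it sketches (play repeatedly, then report measured competence), here is a minimal tabular Q-learning toy. The corridor game, parameters and report format are all invented:

```python
import random
from collections import defaultdict

random.seed(0)

# Toy "game": walk a 4-cell corridor; reach cell 3 within 12 moves to win.
GOAL, MAX_STEPS, ACTIONS = 3, 12, (-1, 1)

def play(q, eps=0.3, alpha=0.5, gamma=0.9, learn=True):
    """Play one episode; update the Q-table while learning. True on a win."""
    state = 0
    for _ in range(MAX_STEPS):
        if learn and random.random() < eps:
            action = random.choice(ACTIONS)                     # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
        nxt = min(max(state + action, 0), GOAL)
        reward = 1.0 if nxt == GOAL else 0.0
        if learn:
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt
        if state == GOAL:
            return True
    return False

q = defaultdict(float)
for _ in range(2000):            # learn by repeatedly playing the game
    play(q)

# Competency report: state the measured skill and the evidence behind it.
trials = [play(q, learn=False) for _ in range(1000)]
accuracy = 100 * sum(trials) / len(trials)
print(f"On this task I succeed {accuracy:.0f}% of the time, over {len(trials)} trials.")
```

The point of the sketch is the last three lines: the learner's competence is measured empirically and reported alongside the number of trials, in the spirit of the article's "90 percent accuracy, over 1,000 times" example.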

"People need to understand an autonomous system's skills and limitations to trust it with critical decisions," added Heintz.

Raytheon also reported that once the system has developed these skills, the team will apply it to a simulated search and rescue mission. Users will create the conditions of the mission, and the system will make recommendations and give users information about its capability under those specific conditions. For example, the system might say, "In the rain, at night, I can distinguish between a person and an inanimate object with 90 percent accuracy, and I have done this over 1,000 times."


Postdoc Research Assoc./Senior Research Assoc. – Image Processing/Computer Vision & Machine Learning job with LANCASTER UNIVERSITY | 192791 -…

Postdoctoral Research Associate/ Senior Research Associate in Image Processing/ Computer Vision and Machine Learning

In search of uniqueness harnessing anatomical hand variation (H-unique)

School of Computing and Communications

Salary: £28,331 to £40,322

Closing Date: 29 February 2020

Interview Date: Mid-March 2020

Contract: Fixed Term

H-unique is a five-year, €2.5M ERC-funded programme of research, led by Lancaster University. It will be the first multimodal automated interrogation of visible hand anatomy, through analysis and interpretation of human variation. This exciting research opportunity has arisen directly from the ground-breaking research undertaken by Prof Dame Sue Black in relation to the forensic identification of individuals from images of their anatomy captured in criminal cases.

Assessment of the evidential robustness of hand identification for prosecutorial purposes requires the degree of uniqueness in the human hand to be assessed through large volume image analysis. The research opens up the opportunity to develop new and exciting biometric capabilities with a wide range of real-world applications, from security access to border control whilst assisting the investigation of serious and organised crime on a global level.

This is an interdisciplinary project, supported by anatomists, anthropologists, geneticists, bioinformaticians, image analysts and computer scientists. We are investigating inherent and acquired variation in search of uniqueness, as the hand retains and displays a multiplicity of anatomical variants formed by different aetiologies (genetics, development, environment, accident etc).

The primary aim of the H-Unique project is the successful analysis and interpretation of anatomical variation in images of the human hand. This will be achieved by developing new image processing/computer vision methods to extract key features from human hand images (e.g. vein pattern, skin knuckle creases, tattoos, pigmentation pattern) in a way that is robust to changes in viewpoint, illumination, background, etc. The project will be successful if no two hands can be found to be identical, implying uniqueness. Large datasets are vital for this work to be legally admissible. Through citizen engagement with science, this research will collect images from over 5,000 participants, creating an active ground-truth dataset. It will examine and address the effects of variable image conditions on data extraction and will design algorithms that permit auto-pattern searching across large numbers of stored images of variable quality. This will provide a major novel breakthrough in the study of anatomical variation, with wide ranging, interdisciplinary and transdisciplinary impact.
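
The posting does not disclose H-Unique's actual methods; purely as an illustration of the pattern-searching task it describes (score a probe image's features against a gallery of enrolled hands), here is a toy sketch in which each hand is reduced to a synthetic binary feature vector and matched by Hamming similarity. All vectors, names and the encoding itself are invented:

```python
# Hypothetical sketch: each hand image is reduced to a binary feature vector
# (e.g. presence/absence of knuckle-crease features in a fixed grid), and
# identity candidates are ranked by Hamming similarity.

def hamming_similarity(a: str, b: str) -> float:
    """Fraction of positions at which two equal-length bit strings agree."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def best_match(probe: str, gallery: dict) -> tuple:
    """Return (id, score) of the gallery entry most similar to the probe."""
    return max(
        ((name, hamming_similarity(probe, vec)) for name, vec in gallery.items()),
        key=lambda pair: pair[1],
    )

# Toy gallery of enrolled feature vectors (synthetic).
gallery = {
    "subject_A": "1101001110",
    "subject_B": "0010110001",
    "subject_C": "1111000011",
}
probe = "1101011110"  # noisy capture of subject_A (one bit flipped)
who, score = best_match(probe, gallery)
print(who, round(score, 2))  # subject_A with similarity 0.9
```

A real system would extract far richer, viewpoint- and illumination-robust features, but the gallery-vs-probe scoring structure is the same.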

We are seeking to appoint a Postdoctoral/ Senior Research Associate to work on feature extraction and biometric development. Hard biometrics, such as fingerprints, are well understood and some soft biometrics are gaining traction within both biometric and forensic domains (e.g. superficial vein pattern, skin crease pattern, morphometry, scars, tattoos and pigmentation pattern). A combinatorial approach of soft and hard biometrics has not previously been attempted from images of the hand. We will pioneer the development of new methods that will release the full extent of variation locked within the visible anatomy of the human hand and reconstruct its discriminatory profile as a retro-engineered multimodal biometric. A significant step change is required in the science to both reliably and repeatably extract and compare anatomical information from large numbers of images especially when the hand is not in a standard position or when either the resolution or lighting in the image is not ideal.

We invite applications from enthusiastic individuals who have a PhD or equivalent experience in a relevant discipline such as Computer Science or Electrical Engineering. You must be able to demonstrate a research background in the area of image processing, computer vision, and/or deep learning. Familiarity with image analysis methods, biometrics and machine learning/deep learning frameworks is not essential but will put you at an advantage. We will also value highly your ability to learn rapidly and to adapt to new technologies beyond your current skills and expertise. For more details, please see the Job Description/Person Specification for this position.

This Postdoctoral/ Senior Research Associate position is being offered on a 2-year fixed-term basis. For further information or an informal discussion please contact Dr Bryan Williams (b.williams6@lancaster.ac.uk), Prof Plamen Angelov (Email: p.angelov@lancaster.ac.uk) or Dr Hossein Rahmani (Email: h.rahmani@lancaster.ac.uk).

The School of Computing and Communications offers a highly inclusive and stimulating environment for career development, and you will be exposed to a range of further opportunities over the course of this post. We are committed to family-friendly and flexible working policies on an individual basis, as well as the Athena SWAN Charter, which recognises and celebrates good employment practice undertaken to address gender equality in higher education and research.

Lancaster University - ensuring equality of opportunity and celebrating diversity


Manufacturing 2020: 5G, AI, IoT And Cloud-Based Systems Will Take Over – DesignNews

Technology vendors expect that 2020 will be a big year for manufacturing plants to onboard digital systems. But will it happen? While digital systems (IoT, machine learning, 5G, cloud-based systems) have proven themselves worthwhile investments, they may not get deployed widely.

For insight on what to expect in 2020, we turned to Rajeev Gollarahalli, chief business officer at 42Q, a cloud-based MES software division of Sanmina. Gollarahalli sees a manufacturing world that will take solid steps toward digitalization in 2020, but those steps are likely to be incremental rather than revolutionary.

5G On The Plant Floor

Design News: Will 5G increase the pace of digital factory transformation, and where will it have the most impact?

Rajeev Gollarahalli: We've started to see a little 5G popping up in the factory, but it's limited. It's mostly still in the proof-of-concept stage. It will be some time before we see more, probably around the end of 2020.

DN: Will 5G increase the pace of digital transformation?

Gollarahalli: Undoubtedly. Yet one limit is that in order to make accurate decisions, you need to be able to ingest high volumes of data in real time. That's been one of the limitations in infrastructure. When you can use 5G across the factory, you'll have considerable infrastructure. That challenge with data is solved by 5G.

DN: What still needs to be done in order to deploy 5G?

Gollarahalli: You have the 5G service providers and 5G equipment manufacturers working together. Both are developing capabilities in their own silos. What has not yet matured is putting these together, whether it's in health, discrete manufacturing, telecom, or aerospace. The use cases haven't matured, but we are seeing more use cases piling up.

DN: What could spur equipment vendors and telecom to work together?

Gollarahalli: I think we'll see an industry consortium. That doesn't exist now. There are partners that are starting to talk. Verizon is working with network providers. You're going to see two or three different groups emerge and come together to do standards. With the advent of 5G and the emergence of IIoT, they are all going to come together. One of the limitations is the volume: we generate about a terabyte of data with IoT. The timing will be perfect for getting 5G utilized for IoT and getting it widely adopted.

The Emerging Workforce Skilled In Digital Systems

DN: What changes in the plant workforce can we expect in the coming year?

Gollarahalli: The workforce will need a completely different set of skills to drive automation on the factory floor, and industry has to learn how to attract those workers. People are saying manufacturing is contracting, but I'm not seeing it. Manufacturing seems to be stable. As for skills for the factory of the future, we need to be re-tooling our employees. The employees today don't have the technical skills, but they have the domain skills. We need to get them the technical skills they need.

DN: Will the move to a workforce with greater technology skills be disruptive?

Gollarahalli: You're not going to see mass layoffs, but you're going to see retooling of employees' skills. We can't get them trained at the speed that technology is advancing. We're going to see more employees getting ready in trade schools and with degrees. What you're seeing is a convergence of data skills with AI and domain skills. An ideal skillset is someone who understands manufacturing and knows the data. For several years kids were moving away from STEM, wanting to learn the sexier stuff. But I think STEM is coming back.

Cloud-Based Systems For Security

DN: Will cloud-based systems be the go-to for manufacturing security versus on-premises security?

Gollarahalli: Five years ago, when I talked about cloud with customers, they asked whether it was real-time. That was when the infrastructure was not as secure. I have a network at home; that was unheard of 10 years ago in factories. Now that the infrastructure issue has been solved, the next step is security. I have always countered that you can't secure data on premises as well as you can in a cloud. A lot of money has poured into cloud-based security. No single company can match that. It's almost impossible to do it on premises.

AI, Machine Learning and Big Data Analytics

DN: Will we see advances in AI, machine learning, and analytics?

Gollarahalli: We're seeing AI and ML (machine learning) in some areas. We're seeing it implemented in some areas at 42Q. Most use cases are around asset management and quality. It's used to predict the quality of a product and to take preventive actions in asset maintenance. AI and ML are also popping up in supply chain management. 2020 will be the year of AI and ML. It's getting embedded into medical products. You'll see it pop up everywhere, showing up on the factory floor as well as in our consumer products.

DN: Are AI and machine learning going mainstream yet, or are they mostly getting deployed by large manufacturers, who are typically the early users?

Gollarahalli: You're going to see it move down the supply chain to tier 2 and tier 3 suppliers. I don't think it's just for the elite any more. It's getting adopted quickly, but not as quickly as I thought it would.

The Role Of IoT In Manufacturing

DN: Will we see growth in IoT's role in measuring and providing closed-loop controls?

Gollarahalli: We're going to see it in manufacturing, regulating the humidity in the room or the temperature on the floor. They need closed loop from IoT. They're measuring with IoT, but the closed loop has not been adopted as quickly. We don't have the right standards. How do you do closed loop with a system that is throwing off data in milliseconds? You must be able to use the IoT and those algorithms. If you can make them more efficient for closed-loop control, you'll see a lot more of it going forward.
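
As a sketch of what a closed loop over streaming sensor data means in practice, here is a minimal hysteresis (bang-bang) controller of the thermostat kind Gollarahalli alludes to. The thresholds and readings are invented:

```python
def hysteresis_controller(readings, low=20.0, high=24.0):
    """Closed-loop bang-bang control: turn heating on when the temperature
    falls below `low`, off when it rises above `high`, and otherwise keep
    the previous state to avoid rapid toggling near a single setpoint."""
    heating = False
    states = []
    for t in readings:
        if t < low:
            heating = True
        elif t > high:
            heating = False
        states.append(heating)
    return states

stream = [21.0, 19.5, 19.8, 22.0, 24.5, 23.0]   # simulated sensor stream
print(hysteresis_controller(stream))
# [False, True, True, True, False, False]
```

The dead band between `low` and `high` is the design choice that matters: with millisecond-rate sensor data, a controller without hysteresis would chatter on and off around the setpoint.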

Rob Spiegel has covered automation and control for 19 years, 17 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.



Doctor’s Hospital focused on incorporation of AI and machine learning – EyeWitness News

NASSAU, BAHAMAS – Doctors Hospital has deprioritized its medical tourism program and is now more keenly focused on incorporating artificial intelligence and machine learning into its healthcare services.

Dr Charles Diggiss, Doctors Hospital Health System president, revealed the shift during a press conference to promote the 2020 Bahamas Business Outlook conference at Baha Mar next Thursday.

"When you look at what's happening around us globally, with the advances in technology, it's no surprise that the way companies leverage data becomes a game changer if they are able to leverage the data using artificial intelligence or machine learning," Diggiss said.

"In healthcare, what makes it tremendously exciting for us is we are able to sensorize all of the devices in the healthcare space, get much more information, and use that information to tell us a lot more about what we should be doing and considering in your diagnosis."

He continued: "How can we get information in real time that would influence the way we manage your conditions? How can we have on the backend the assimilation of this information so that the best outcome occurs in our patient care environment?"

Diggiss noted that while the BISX-listed healthcare provider is still involved in medical tourism, it is no longer a primary focus.

"We still have a business line of medical tourism, but one of the things we do know pretty quickly in Doctors Hospital is to deprioritize if it's apparent that that is not a successful way to go," he said.

"We have looked more at taking our specialities up a notch and investing in the technology support of the specialities, with the leadership of some significant Bahamian specialists abroad, inviting them to come back home."

He added: "We have deprioritized medical tourism, even though we still have a fairly robust programme going on at our Blake Road facility featuring two lines: a stem cell line and a fecal microbiotic line."

"They are both doing quite well, but we are not putting a lot of effort into that right now compared to the aforementioned."


The bubbles in VR, cryptocurrency and machine learning are all part of the parallel computing bubble – Boing Boing

Yesterday's column by John Naughton in the Observer revisited Nathan Myhrvold's 1997 prediction that when Moore's Law runs out -- that is, when processors stop doubling in speed every 18 months through an unbroken string of fundamental breakthroughs -- programmers would have to return to the old disciplines of writing incredibly efficient code whose main consideration was the limits of the computer it ran on.

I'd encountered this idea several times over the years, whenever it seemed that Moore's Law was petering out, and it reminded me of a prediction I'd made 15 years ago: that as computers ceased to get faster, they would continue to get wider -- that is, that the price of existing processors would continue to drop, even if the speed gains petered out -- and that this would lead programmers towards an instinctual preference for solving the kinds of problems that could be solved in parallel (where the computing could be done on several processors at once, because each phase of the solution was independent of the others) and an instinctual aversion to problems that had to be solved in serial (where each phase of the solution took the output of the previous phase as its input, meaning all the steps had to be solved in order).
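
The parallel/serial distinction can be made concrete in a few lines of Python (a sketch with toy workloads): the map has no dependencies between elements, so it spreads across workers, while the fold threads each step's output into the next and cannot be split up, no matter how many processors are available:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

data = list(range(1, 9))

def independent_work(x):
    return x * x  # stand-in for an expensive, independent computation

# Parallel-friendly: every element is independent, so the work maps
# cleanly onto as many workers as we can afford.
with ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(independent_work, data))

# Inherently serial: each step consumes the previous step's output,
# so extra processors cannot shorten the chain of dependent steps.
def step(acc, x):
    return acc * x % 1009  # toy dependency on the running value

serial_result = reduce(step, data, 1)

print(squares)        # [1, 4, 9, 16, 25, 36, 49, 64]
print(serial_result)  # 969
```

The function names and workloads are invented; the structural point is that `pool.map` expresses a "wide" problem and `reduce` expresses a "deep" one.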

That's because making existing processors more cheaply requires only minor, incremental improvements in manufacturing techniques, while designing new processors that are significantly faster requires major breakthroughs in materials science, chip design, etc. These breakthroughs aren't just unpredictable in terms of when they'll arrive; they're also unpredictable in terms of how they will play out. One widespread technique deployed to speed up processors is "branch prediction," wherein a processor attempts to predict which instruction will follow the one it's currently executing and begins executing it without waiting for the program to tell it to do so. This gave rise to a seemingly unstoppable cascade of ghastly security defects that the major chip vendors are still struggling with.

So if you write a program that's just a little too slow for practical use, you can't just count on waiting a couple of months for a faster processor to come along.

But cheap processors continue to get cheaper. If you have a parallel problem that needs a cluster that's a little outside your budget, you don't need to rewrite your code -- you can just stick it on the shelf for a little while and the industry will catch up with you.

Reading Naughton's column made me realize that we were living through a parallel computation bubble. The period in which Moore's Law had declined also overlapped with the period in which computing came to be dominated by a handful of applications that are famously parallel -- applications that have seemed overhyped even by the standards of the tech industry: VR, cryptocurrency mining, and machine learning.

Now, all of these have other reasons to be frothy: machine learning is the ideal tool for empiricism-washing, through which unfair policies are presented as "evidence-based"; cryptocurrencies are just the thing if you're a grifty oligarch looking to launder your money; and VR is a new frontier for the moribund, hyper-concentrated entertainment industry to conquer.

It's possible that this is all a coincidence, but it really does feel like we're living in a world spawned by a Sand Hill Road VC in 2005 who wrote "What should we invest in to take advantage of improvements in parallel computing?" on top of a whiteboard.

That's as far as I got. Now what I'm interested in is what a counterfactual would look like. Say (for the purposes of the thought experiment) that processors had continued to gain in speed, but not in parallelization -- that, say, a $1000 CPU doubled in power every 18 months, but that there weren't production lines running off $100 processors in bulk that were 10% as fast.

What computing applications might we have today?

(Image: Xiangfu, CC BY-SA)



Machine Learning to Predict the 1-Year Mortality Rate After Acute Ante | TCRM – Dove Medical Press

Yi-ming Li,1,* Li-cheng Jiang,2,* Jing-jing He,1 Kai-yu Jia,1 Yong Peng,1 Mao Chen1

1Department of Cardiology, West China Hospital, Sichuan University, Chengdu, People's Republic of China; 2Department of Cardiology, The First Affiliated Hospital, Chengdu Medical College, Chengdu, People's Republic of China

*These authors contributed equally to this work

Correspondence: Yong Peng; Mao Chen, Department of Cardiology, West China Hospital, Sichuan University, 37 Guoxue Street, Chengdu 610041, People's Republic of China. Email: pengyongcd@126.com; hmaochen@vip.sina.com

Abstract: A formal risk assessment for identifying high-risk patients is essential in clinical practice and promoted in guidelines for the management of anterior acute myocardial infarction. In this study, we sought to evaluate the performance of different machine learning models in predicting the 1-year mortality rate of anterior ST-segment elevation myocardial infarction (STEMI) patients and to compare the utility of these models to the conventional Global Registry of Acute Coronary Events (GRACE) risk scores. We enrolled all of the patients aged >18 years with discharge diagnoses of anterior STEMI in the West China Hospital, Sichuan University, from January 2011 to January 2017. A total of 1244 patients were included in this study. The mean patient age was 63.8 ± 12.9 years, and the proportion of males was 78.4%. The majority (75.18%) received revascularization therapy. In the prediction of the 1-year mortality rate, the areas under the curve (AUCs) of the receiver operating characteristic (ROC) curves of the six models ranged from 0.709 to 0.942. Among all models, XGBoost achieved the highest accuracy (92%), specificity (99%) and F1 score (0.72) for predictions with the full variable model. After feature selection, XGBoost still obtained the highest accuracy (93%), specificity (99%) and F1 score (0.73). In conclusion, machine learning algorithms can accurately predict the rate of death after a 1-year follow-up of anterior STEMI, especially the XGBoost model.
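
The paper's models are not reproduced here, but the AUC metric it reports has a simple rank interpretation: the probability that a randomly chosen positive case (a patient who died within a year) receives a higher risk score than a randomly chosen negative one. A minimal pure-Python sketch with synthetic labels and scores:

```python
def roc_auc(labels, scores):
    """AUC via the rank formulation: the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative case (ties
    count half). Equivalent to the area under the ROC curve."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example: 1 = died within a year, scores = predicted risk.
labels = [0, 0, 1, 0, 1, 1, 0, 1]
scores = [0.1, 0.3, 0.35, 0.4, 0.6, 0.8, 0.2, 0.7]
print(round(roc_auc(labels, scores), 3))  # 0.938
```

An AUC of 0.5 means the score is no better than chance at ranking positives above negatives; the paper's best models, at up to 0.942, rank almost every eventual death above the survivors.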

Keywords: machine learning, prediction model, acute anterior myocardial infarction



Forget Machine Learning, Constraint Solvers are What the Enterprise Needs – RTInsights

Constraint solvers take a set of hard and soft constraints in an organization and formulate the most effective plan, taking into account real-time problems.

When a business looks to implement an artificial intelligence strategy, even proper expertise can be too narrow. It's what has led many businesses to deploy machine learning or neural networks to solve problems that require other forms of AI, like constraint solvers.

Constraint solvers take a set of hard and soft constraints in an organization and formulate the most effective plan, taking into account real-time problems. It is the best solution for businesses that have timetabling, assignment or efficiency issues.

In a RedHat webinar, principal software engineer Geoffrey De Smet ran through three use cases for constraint solvers.

Vehicle Routing

Efficient delivery management is something Amazon has seemingly perfected, so much so that it's now an annoyance to have to wait 3-5 days for an item to be delivered. Using RedHat's OptaPlanner, businesses can improve vehicle routing by 9 to 18 percent by optimizing routes and ensuring drivers are able to deliver an optimal amount of goods.

To start, OptaPlanner takes in all the necessary constraints, like truck capacity and driver specialization. It also takes into account regional laws, like the amount of time a driver is legally allowed to drive per day, and creates a route for all drivers in the organization.
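
OptaPlanner's own solving is far more sophisticated, but the flavour of route construction can be sketched with a greedy nearest-neighbour baseline over invented coordinates; a real solver would then improve such a route under capacity and legal-driving-time constraints:

```python
from math import dist

# Toy depot-and-stops instance (invented coordinates, distances in km).
DEPOT = (0.0, 0.0)
STOPS = {"a": (2.0, 1.0), "b": (5.0, 0.0), "c": (1.0, 4.5), "d": (6.0, 5.0)}

def nearest_neighbour_route(depot, stops):
    """Greedy route construction: from the current position, always drive
    to the closest unvisited stop. A crude baseline, not an optimizer."""
    route, pos, todo = [], depot, dict(stops)
    while todo:
        nxt = min(todo, key=lambda s: dist(pos, todo[s]))
        route.append(nxt)
        pos = todo.pop(nxt)
    return route

print(nearest_neighbour_route(DEPOT, STOPS))  # ['a', 'b', 'd', 'c']
```

Greedy construction is typically only a starting point; constraint solvers then apply local-search moves (swapping and reinserting stops) while scoring every candidate against the hard and soft constraints.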


In a practical case, De Smet said RedHat saved a technical vehicle routing company over $100 million per year with the constraint solver. Driving time was reduced by 25 percent and the business was able to reduce its headcount by 10,000.

"The benefits [of OptaPlanner] are to reduce cost, improve customer satisfaction and employee well-being, and save the planet," said De Smet. "The nice thing about some of these is that they're complementary; for example, reducing travel time also reduces fuel consumption."

Employee timetabling

Knowing who is covering what shift can be an infuriating task for managers, with all the requests for time off, illness and mandatory days off. In a place where 9 to 5 isn't the norm, it can be even harder to keep track of it all.

RedHat's OptaPlanner is able to take all of the hard constraints (two days off per week, no more than eight-hour shifts) and soft constraints (should have up to 10 hours' rest between shifts) and formulate a timetable that takes all of that into account. When someone asks for a day off, OptaPlanner is able to reassign workers in real time.
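
OptaPlanner itself is a Java constraint solver; to illustrate just the hard/soft split described above, here is a deliberately tiny brute-force roster in Python, where infeasible plans (hard violations) are discarded outright and the feasible ones are ranked by a soft score. The employees, shifts and rules are all invented:

```python
from itertools import product

EMPLOYEES = ("Ann", "Ben")
SHIFTS = ("Mon-day", "Mon-night", "Tue-day")

def violates_hard(plan):
    """Hard constraint (invented): the Mon-night worker may not also take
    the Tue-day shift, i.e. no night shift straight into a morning shift."""
    return plan["Mon-night"] == plan["Tue-day"]

def soft_score(plan):
    """Soft constraint (invented): prefer spreading work evenly; the score
    is minus the gap between the busiest and least-busy employee."""
    loads = [list(plan.values()).count(e) for e in EMPLOYEES]
    return min(loads) - max(loads)

best = None
for combo in product(EMPLOYEES, repeat=len(SHIFTS)):
    plan = dict(zip(SHIFTS, combo))
    if violates_hard(plan):
        continue                      # hard violation: discard outright
    if best is None or soft_score(plan) > soft_score(best):
        best = plan

print(best, soft_score(best))
```

Real solvers never enumerate every assignment like this; they search the space heuristically, but they rank candidates the same way: hard constraints gate feasibility, soft constraints break ties among feasible plans.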

De Smet said this is useful for jobs that need to run 24/7, like hospitals, the police force, security firms, and international call centers. According to RedHat's simulation, it should improve employee well-being by 19 to 85 percent, alongside improvements in retention and customer satisfaction.

Task assignment

Even within a single business department, there are skills only a few employees have. For instance, in a call center, only a few will be able to speak fluently in both English and French. To avoid customer annoyance, it is imperative for employees with the right skill set to be assigned correctly.

With OptaPlanner, managers are able to add employee skills and have the AI assign employees correctly. Using the call center example again, a bilingual advisor may take all calls in French on one day when there's high demand for it, but on other days handle a mix of French and English.

For customer support, the constraint solver would be able to assign a problem to the correct advisor, or to the next best thing, before the customer is connected, thus avoiding giving out the wrong advice or having to pass the customer on to another advisor.

In the webinar, De Smet said that while the constraint solver is a valuable asset for businesses looking to reduce costs, this shouldn't be their only aim.

Without having all stakeholders involved in the implementation, the AI could end up harming other areas of the business, like customer satisfaction or employee retention. This is a similar warning to that given by analysts on any AI implementation: it needs to come from a genuine desire to improve the business to get the best outcome.


Limits of machine learning – Deccan Herald

Suppose you are driving a hybrid car with a personalised Alexa prototype and happen to witness a road accident. Will your Alexa automatically stop the car to help the victim or call an ambulance? Probably, it would act according to the algorithm programmed into it, which demands the user's command.

But as a fellow traveller with Alexa, what would you do? If you are an empathetic human being, you would try to administer first aid and take the victim to a nearby hospital in your car. This empathy is what is missing in the machines, and largely in the technocratic education that parents are banking on these days.

Tech-buddies

With the advancement of bots or robots teaching in our classrooms, the teachers of millennials are worried. Recently, a WhatsApp video of an AI teacher engaging a class in one of the schools of Bengaluru went viral. Maybe in a decade or two, academic robots in our classrooms will teach mathematics. Or perhaps they will teach children the algorithms that bring them to life, and together they can create another generation of tech-buddies.

I was informed by a friend that coding is taught at the primary level now, which was indeed a surprise for me. Then what about other skills? Maybe life skills like swimming and cooking could also be taught by a combination of YouTube and personal robots. However, we have the edge over the machines in at least one area, and that's basic human values. This is where human intervention can't be eliminated at all.

Values are not taught; rather, they are ingrained at every phase of life by the various people we meet, including parents, teachers, peers and anyone around us, alongside practising them. For example, how does one teach kids to care for the elderly at home?

Unless they feel the same emotional turmoil as the elderly before them as they are raised, and apply those compassionate values, they wouldn't be motivated to take care of them.

The missing link in academia

Discussions on trans-disciplinary or interdisciplinary courses often put forward multiple subjects, as well as unconventional subjects, to study together, like engineering and terracotta design or literature and agriculture. However, the objection comes from within academia, citing a lack of career prospects.

We tend to forget that the best mathematicians were also musicians, and the best medical practitioners were also botanists or farmers. Interest in one subject might trigger expertise in another and connect the discrete dots to create a completely new concept.

Life skills like agriculture, pottery, animal care, gardening, and housing are essential skills with many benefits. Every rural person acquires these skills through surrounding experiences. Rather than in a classroom session, this learning takes place by seeing, interacting, and making mistakes.

A friend who homeschooled both her kids had similar concerns. She was firmly against formalised education, which teaches a limited amount of information, mostly based on memorisation, and drains the natural interest of the child. Several institutes are functioning to serve the same goal of lifelong learning. Such schools, aiming at understanding human nature, emotional wellbeing, and artistic and critical thinking, are fundamentally guided by the idea of learning in a fear-free environment.

When scrolling through the admissions pages of these schools, I was surprised to find that admissions for the 2021 academic year were already complete. This reflects the eagerness of many parents looking for such alternative education systems.

These analogies bring back the basic question: why education? If it is merely for technology-driven jobs, then probably by the time your kids grow up there won't be many jobs, as the machines will have snatched them.

Also, the country is moving towards a technology-driven economy and may not need many skilled labourers. Surely, a few post-millennials will survive in any condition if they are extremely smart and adaptive, but they may need to stop and reboot if their education has not prepared them for the uncertainties to come.

(The writer is with Christ, Bengaluru)

Read the original:
Limits of machine learning - Deccan Herald

How Will Your Hotel Property Use Machine Learning in 2020 and Beyond? | – Hotel Technology News

Every hotel should ask the same question. How will our property use machine learning? It's not just a matter of gaining a competitive advantage; it's imperative in order to stay in business. By Jason G. Bryant, Founder and CEO, Nor1 - 1.9.2020

Artificial intelligence (AI) implementation has grown 270% over the past four years and 37% in the past year alone, according to Gartner's 2019 CIO Survey of more than 3,000 executives. On the ubiquity of AI and machine learning (ML), Gartner VP Chris Howard notes, "If you are a CIO and your organization doesn't use AI, chances are high that your competitors do and this should be a concern" (VentureBeat). Hotels may not have CIOs, but any business not seriously considering the implications of ML throughout the organization will find itself in multiple binds, from the inability to offer next-level guest service to operational inefficiencies.

Amazon is the poster child for a sophisticated company committed to machine learning, both in offers (personalized commerce) and behind the scenes in its facilities. Amazon Founder and CEO Jeff Bezos attributes much of Amazon's ongoing financial success and competitive dominance to machine learning. Further, he has suggested that the entire future of the company rests on how well it uses AI. However, as Forbes contributor Kathleen Walsh notes, there is no single AI group at Amazon; rather, every team is responsible for finding ways to utilize AI and ML in its work. It is common knowledge that all senior executives at Amazon plan, write, and adhere to a six-page business plan, and a piece of every plan for every business function is devoted to answering the question: How will you utilize machine learning this year?

Every hotel should ask the same question. How will our property use machine learning? It's not just a matter of gaining a competitive advantage; it's imperative in order to stay in business. In the 2017 Deloitte State of Cognitive Survey, which canvassed 1,500 mostly C-level executives, not a single respondent believed that cognitive technologies would not drive substantive change. Put more simply: every executive in every industry knows that AI is fundamentally changing the way we do business, in services and products as well as operations. Further, 94% reported that artificial intelligence would substantially transform their companies within five years, with most believing the transformation would occur by 2020.

Playing catch-up with this technology can be competitively dangerous as there is significant time between outward-facing results (when you realize your competition is outperforming you) and how long it will take you to achieve similar results and employ a productive, successful strategy. Certainly, revenue management and pricing will be optimized by ML, but operations, guest service, maintenance, loyalty, development, energy usage, and almost every single aspect of the hospitality enterprise will be impacted as well. Any facility where the speed and precision of tactical decision making can be improved will be positively impacted.

Hotels are quick to think that ML means robotic housekeepers and facial-recognition kiosks. While these are possibilities, ML can do so much more. Here are just a few of the ways hotels are using AI to save money, improve service, and become more efficient.

Hilton's Energy Program

The LightStay program at Hilton predicts energy, water, and waste usage and costs. The company can track actual consumption against predictive models, which allows it to manage year-over-year performance as well as performance against competitors. Further, some hotel brands can link in-room energy to the PMS so that when a room is empty, the air conditioner automatically turns off. The future of sustainability in the hospitality industry relies on ML to shave every bit off of energy usage and budget. For brands with hundreds or thousands of properties, every dollar saved on energy can affect the bottom line in a big way.
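Predict-versus-actual tracking of this kind can be sketched in a few lines. The figures and features below are hypothetical, not Hilton's, and a simple least-squares model stands in for LightStay's proprietary one:

```python
import numpy as np

# Hypothetical monthly data for one property: [occupied room-nights, avg outdoor temp (C)]
features = np.array([
    [2100, 5], [2300, 8], [2600, 14], [2900, 21],
    [3100, 26], [3000, 24], [2700, 16], [2200, 7],
], dtype=float)
energy_kwh = np.array([98_000, 101_000, 96_000, 112_000,
                       121_000, 118_000, 99_000, 97_000], dtype=float)

# Fit a least-squares linear model: energy ~ b0 + b1*occupancy + b2*temperature
X = np.column_stack([np.ones(len(features)), features])
coef, *_ = np.linalg.lstsq(X, energy_kwh, rcond=None)

def predict(occupied_room_nights, avg_temp_c):
    """Predicted monthly energy use (kWh) for given occupancy and weather."""
    return coef @ np.array([1.0, occupied_room_nights, avg_temp_c])

# Flag months where actual consumption exceeded the model's prediction by >5%
predicted = X @ coef
overruns = [i for i, (a, p) in enumerate(zip(energy_kwh, predicted)) if a > 1.05 * p]
print("Months over predicted usage:", overruns)
```

The same compare-to-baseline idea scales to year-over-year and cross-property benchmarking; only the model behind `predict` gets more sophisticated.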

IHG & Human Resources

IHG employs 400,000 people across 5,723 hotels. Holding fast to the idea that the ideal guest experience begins with staff, IHG implemented AI strategies to "find the right team member who would best align and fit with each of the distinct brand personalities," notes Hazel Hogben, Head of HR, Hotel Operations, IHG Europe. To create brand personas and algorithms, IHG assessed its top customer-facing senior managers across brands using cognitive, emotional, and personality assessments, then correlated the results with KPI and customer data. Finally, this was cross-referenced with the values of the different brands. The algorithms are used to create assessments that test candidates for hire against the personas using gamification-based tools, according to The People Space. Hogben notes that in addition to improving the candidate experience (candidates like the gamification), it has also helped eliminate personal or preconceived bias among recruiters. Regarding ML for hiring, Harvard Business Review says that in addition to combatting human bias by automatically flagging biased language in job descriptions, ML also identifies highly qualified candidates who might have been overlooked because they didn't fit traditional expectations.
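The persona-matching step IHG describes, testing candidates against brand personas, might look something like the following sketch. The trait names, scores, and cosine-similarity scoring are illustrative assumptions, not IHG's actual algorithm:

```python
import math

# Hypothetical brand personas: trait scores (0-1) for openness, empathy, drive, detail
PERSONAS = {
    "luxury_brand": [0.7, 0.9, 0.6, 0.9],
    "budget_brand": [0.6, 0.7, 0.9, 0.5],
}

def cosine(a, b):
    """Cosine similarity between two equal-length score vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_brand_fit(candidate_scores):
    """Return the brand persona most similar to a candidate's assessment scores."""
    return max(PERSONAS, key=lambda brand: cosine(candidate_scores, PERSONAS[brand]))

print(best_brand_fit([0.8, 0.9, 0.5, 0.8]))  # closer to the luxury persona
```

A real system would derive the persona vectors from the manager assessments and KPI correlations the article describes, rather than hand-picking them.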

Accor Hotels Upgrades

A 2018 study showed that 70% of hotels say they never, or only sometimes, promote upgrades or upsells at check-in (PhocusWire). In an effort to maximize the value of premium inventory and increase guest satisfaction, Accor Hotels partnered with Nor1 to implement eStandby Upgrade. With the ML-powered technology, Accor Hotels offers guests personalized upgrades, based on previous guest behavior, at a price the guest has shown a demonstrated willingness to pay, at booking and during the pre-arrival period up to 24 hours before check-in. This allows the brand to monetize and leverage room features that can't otherwise be captured by standard room category definitions, and to optimize the allocation of inventory available on the day of arrival. ML technology can create offers at any point along the guest pathway, including the front desk. Rather than replacing agents, as some hotels fear, it helps them make better, quicker decisions about what to offer guests.
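Pricing an upgrade to a guest's demonstrated willingness to pay can be illustrated with a toy model. The offer history and the expected-revenue rule below are hypothetical stand-ins for Nor1's eStandby Upgrade logic:

```python
# Hypothetical offer history for a guest: (offered_price, accepted) pairs
history = [(30, True), (45, True), (60, False), (50, True), (75, False)]

def acceptance_rate(history, price):
    """Estimate P(accept) at a price as the share of past offers at or above
    this price that were accepted (a crude monotone estimate)."""
    relevant = [accepted for offered, accepted in history if offered >= price]
    return sum(relevant) / len(relevant) if relevant else 0.0

def best_offer_price(history, candidate_prices):
    """Pick the price maximizing expected incremental revenue."""
    return max(candidate_prices, key=lambda p: p * acceptance_rate(history, p))

print(best_offer_price(history, [30, 40, 50, 60, 70]))
```

The interesting trade-off is visible even in this sketch: the highest acceptable price is not always the revenue-maximizing one, because acceptance probability falls as price rises.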

Understanding Travel Reviews

The luxury Dorchester Collection wanted to understand what makes its high-end guests tick. Instead of using traditional secret-shopper methods, which don't tell hotels everything they need to know about the guest experience, Dorchester Collection opted to analyze traveler feedback from across major review sites using ML. Much to its surprise, it discovered that Dorchester's guests care a great deal more about breakfast than the company thought. It also learned that guests want to customize breakfast, so it removed the breakfast menu and allowed guests to order whatever they like. As it turns out, guests love this.
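Surfacing what guests talk about most, as Dorchester Collection did, can be approximated with simple topic counting. The reviews and topic list below are invented for illustration; a production system would use far richer NLP:

```python
from collections import Counter
import re

# Hypothetical review snippets, standing in for feedback scraped from review sites
reviews = [
    "Loved the breakfast, best eggs I've had at a hotel.",
    "Room was spotless but breakfast options were limited.",
    "Great spa, and the breakfast buffet was excellent.",
    "Checkout was slow; the pool area felt crowded.",
]

TOPICS = {"breakfast", "room", "spa", "pool", "checkout", "service"}

def topic_counts(reviews):
    """Count how often each known topic word appears across all reviews."""
    counts = Counter()
    for review in reviews:
        for word in re.findall(r"[a-z]+", review.lower()):
            if word in TOPICS:
                counts[word] += 1
    return counts

print(topic_counts(reviews).most_common(3))
```

Even this crude tally would flag breakfast as the dominant theme in the sample, which is exactly the kind of surprise the article describes.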

In his May 2019 Google I/O address, Google CEO Sundar Pichai said that thanks to advances in AI, Google is moving beyond its core mission of organizing the world's information: "We are moving from a company that helps you find answers to a company that helps you get things done" (ZDNet). Pichai has long held that we no longer live in a mobile-first world; we now inhabit an AI-first world. Businesses must necessarily pivot with this shift, evolving processes and products, and sometimes the business model itself, as in Google's case.

Hotels that embrace ML across operations will find that the technology improves processes in substantive ways. ML improves the guest experience and increases revenue with precision decisioning and analysis across finance, human resources, marketing, pricing and merchandising, and guest services. Though the Hiltons, Marriotts, and IHGs of the hotel world are at the forefront of adoption, ML technologies are accessible, both in price and implementation, for the full range of properties. The time has come to ask every hotel department: How will you use AI this year?

For more about machine learning and its impact on the hotel industry, download Nor1's ebook The Hospitality Executive's Guide to Machine Learning: Will You Be a Leader, Follower, or Dinosaur?

Jason G. Bryant, Nor1 Founder and CEO, oversees day-to-day operations and provides visionary leadership and strategic direction for the upsell technology company. With Jason at the helm, Nor1 has matured into the technology leader in upsell solutions. Headquartered in Silicon Valley, Nor1 provides innovative revenue-enhancement solutions to the hospitality industry that focus on the intersection of machine learning, guest engagement, and operational efficiency. A seasoned entrepreneur, Jason has over 25 years' experience building and leading international software development and operations organizations.


Go here to see the original:
How Will Your Hotel Property Use Machine Learning in 2020 and Beyond? | - Hotel Technology News

Dell’s Latitude 9510 shakes up corporate laptops with 5G, machine learning, and thin bezels – PCWorld

This business workhorse has a lot to like.

Dell Latitude 9510 hands-on: The three best features

Dell's Latitude 9510 has three features we especially love: The integrated 5G, the Dell Optimizer Utility that tunes the laptop to your preferences, and the thin bezels around the huge display.


The Dell Latitude 9510 is a new breed of corporate laptop. Inspired in part by the company's powerful and much-loved Dell XPS 15, it's the first model in an ultra-premium business line packed with the best of the best, tuned for business users.

Announced January 2 and unveiled Monday at CES in Las Vegas, the Latitude 9510 weighs just 3.2 pounds and promises up to 30 hours of battery life. PCWorld had a chance to delve into the guts of the Latitude 9510, learning more about what's in it and how it was built. Here are the coolest things we saw:

The Dell Latitude 9510 is shown disassembled, with (top, left to right) the magnesium bottom panel, the aluminum display lid, and the internals; and (bottom) the array of ports, speaker chambers, keyboard, and other small parts.

The thin bezels around the 15.6-inch screen (see top of story) are the biggest hint that the Latitude 9510 took inspiration from its cousin, the XPS 15. Despite the size of the screen, the Latitude 9510 is amazingly compact. And yet Dell managed to squeeze in a camera above the display, thanks to a teeny, tiny sliver of a module.

A closer look at the motherboard of the Dell Latitude 9510 shows the 52Wh battery and the areas around the periphery where Dell put the 5G antennas.

The Latitude 9510 is one of the first laptops we've seen with integrated 5G networking. The challenge of 5G in laptops is integrating all the antennas you need within a metal chassis that's decidedly radio-unfriendly.

Dell made some careful choices, arraying the antennas around the edges of the laptop and inserting plastic pieces strategically to improve reception. Two of the antennas, for instance, are placed underneath the plastic speaker components and plastic speaker grille.

The Dell Latitude 9510 incorporated plastic speaker panels to allow reception for the 5G antennas underneath.

Not ready for 5G? No worries. Dell also offers the Latitude 9510 with Wi-Fi 6, the latest wireless networking standard.

You are constantly asking your PC to do things for you, usually the same things, over and over. Dell's Optimizer software, which debuts on the Latitude 9510, analyzes your usage patterns and tries to save you time on routine tasks.

For instance, the Express SignIn feature logs you in faster. The ExpressResponse feature learns which applications you fire up first and loads them faster for you. Express Charge watches your battery usage and will adjust settings to save battery, or step in with faster charging when you need some juice, pronto. Intelligent Audio will try to block out background noise so you can videoconference with less distraction.

The Dell Latitude 9510's advanced features and great looks should elevate corporate laptops in performance as well as style. It will come in clamshell and 2-in-1 versions, and is due to ship March 26. Pricing is not yet available.

Melissa Riofrio spent her formative journalistic years at PCWorld reviewing some of the biggest iron: desktops, laptops, storage, printers. As PCWorld's Executive Editor, she leads PCWorld's content direction and covers productivity laptops and Chromebooks.

See more here:
Dell's Latitude 9510 shakes up corporate laptops with 5G, machine learning, and thin bezels - PCWorld