Top Artificial Intelligence Books Released In 2019 That You Must Read – Analytics India Magazine

Artificial Intelligence had many breakthroughs in 2019. In fact, we can go as far as to say that it has trickled down to every facet of modern life. With AI intervening in our daily lives, it is imperative that everyone understands how it is affecting us, the changes it is bringing about, the threats it poses and the possible solutions.

While some people still think AI is only robots and chatbots, it is important that they know of the advancements in the field. There are many online courses and books on artificial intelligence that give readers a comprehensive understanding, whether they are professionals or AI enthusiasts.

In this article, we have compiled a list of books on artificial intelligence published in 2019 that one can use to learn more about this fascinating technology:

Written by Dr Eric Topol, an American cardiologist, geneticist and digital medicine researcher, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again was an Amazon #1 bestseller this year.

This book boldly sets out the potential of AI in healthcare and deep medicine. Topol calls AI the next industrial revolution. The book uses short examples to highlight AI's importance, along with a thorough discussion of how AI is likely to transform the medical industry. Topol believes that AI can not only enhance diagnosis and treatment but also save doctors time on other activities, such as taking notes and reading scans, eventually letting them spend more time with patients. This is a resourceful book for anyone interested in AI and its impact on healthcare.

Written by Dr Stuart Russell, Human Compatible: AI and the Problem of Control is possibly one of the most important books on AI this year. It discusses the threats posed by artificial intelligence and possible solutions to them. Russell makes use of dry humour so that the book never reads like a dull information manual.

The book is for both the general public and AI researchers. Russell doesn't attack AI; he points out the threats and solutions as someone who feels a sense of responsibility for the changes and revolution his own field is bringing.

The Creativity Code is written by Marcus du Sautoy, a professor of mathematics at the University of Oxford and a Fellow of the Royal Society.

This book is a fact-packed, funny journey into the world of AI. It questions the present meaning of the word creativity and asks how machines might crack the code of human emotions.

The book explores the concept of using AI assistance in art-making, with the mathematics behind machine learning and AI as the centre point of its discussion of art.

Janelle Shane's AIweirdness.com is an AI humour blog that takes a different look at AI and its weirder side. In her book, You Look Like a Thing and I Love You, Shane uses humorous cartoons and pop-culture illustrations to look inside the algorithms used in machine learning.

The authors of this book, Gary Marcus, a scientist and the founder and CEO of Robust.AI, and Ernest Davis, a professor of computer science at NYU, explain what AI is, what it is not, and what it could achieve if we approached it with more resilience and creativity. Many authors hype up AI, both its promise and its dangers; the authors here seem to have found a balance in between.

The book, Rebooting AI: Building Artificial Intelligence We Can Trust, highlights the weaknesses of the current technology, where it is going wrong and what we should be doing to find solutions. It isn't just a book for researchers; it is also for the general public, with many examples and excellent use of humour wherever needed.

Written by Alex Castrounis, this book answers one of the most critical questions of today's age concerning business and AI: how can I build a successful business using AI?

AI for People and Business: A Framework for Better Human Experiences and Business Success is written for anyone interested in making use of AI in their organisation.

The author examines the value of AI and offers solutions for developing an AI strategy that benefits both people and businesses.

This book by Andriy Burkov, The Hundred-Page Machine Learning Book, remains true to its name and manages the seemingly impossible task of bundling all of machine learning into a hundred-page book.

This book provides an in-depth introduction to the field of machine learning with the smart choice of topics for both theory and practice.

If you are new to the field of machine learning, this book gives you a comprehensive introduction to its vocabulary and terminology.


Squirrel AI Learning Attends the Web Summit to Talk About the Application and Breakthrough of Artificial Intelligence in the Field of Education -…

Squirrel AI Learning is not only a global leader among artificial intelligence education enterprises, but also the only Chinese high-tech education enterprise invited to participate in the event. Derek Li, Founder and Chief Educational Technology Scientist of Squirrel AI Learning, shared the stage with big names including Tony Blair, former British Prime Minister; Brad Smith, President of Microsoft; Ping Guo, Vice Chairman and Rotating Chairman of Huawei; and Marc Raibert, Founder and CEO of Boston Dynamics, who together delivered presentations and demonstrations to the audience.

Web Summit has been held annually since 2009. After ten years of development, it has become a world-renowned, large-scale technology event, and the 2019 summit attracted attention from all walks of life. The event not only brought together more than 70,000 technology-company leaders, start-up founders and policymakers from more than 160 countries, but also drew more than 2,600 media outlets from around the world, giving it powerful global influence.

Gathering of Big Names to Discuss the Changes Brought by the Latest Technology

Although the concept of artificial intelligence is much hyped, its practical application across industries cannot be accomplished at a stroke. At this "Davos Forum for tech geeks", many guests shared their perspectives on transportation technology, artificial intelligence, financial technology, earth technology, future technology, wearable devices, big data, front-end design, content creation, and fashion and music-industry technology, among other fields.

Ping Guo, Vice Chairman and Rotating Chairman of Huawei, explained the golden opportunities that 5G may bring from the perspective of 5G technology. He said that "5G + X" will bring an "Age of Wisdom", where X can be artificial intelligence, big data, augmented reality, virtual reality and other technologies. He also predicted that about sixty commercial 5G networks will be in use by the end of this year, and that the 5G era will come earlier than expected.

Marc Raibert, Founder and CEO of Boston Dynamics, showed everyone the first commercial intelligent robot dog, Spot, which is a four-legged mobile intelligent robot that can identify the environment, avoid obstacles, and perform complex tasks such as exploration, patrol and logistics transportation.

AI Applied to Education: Teaching Students According to Their Aptitude to Promote Educational Equality

With the development of AI technology, many industries around the world are facing new changes. Education is the foundation of a nation, and how technology empowers the traditional education industry has long attracted attention. On the day of the summit, Derek Li, Founder and Chief Educational Technology Scientist of Squirrel AI Learning, delivered a speech that brought the event to a climax.

As the first company in China to develop an AI adaptive learning engine with fully independent intellectual property rights and advanced algorithms at its core, Squirrel AI Learning has used a variety of AI technologies, including evolutionary algorithms, neural networks, machine learning, graph theory and Bayesian networks, to recommend personalised learning plans to students over its past few years of practice. The deepening application of this technology, and the real-time improvement and updating of its products, are closely tied to the educational present and future of hundreds of millions of students. At the summit, Li first introduced the overall architecture of Squirrel AI Learning.

Squirrel AI Learning's intelligent adaptive learning system provides student-centred, personalised education, applying artificial intelligence across teaching, learning, assessment, testing and training, with the aim of going beyond human instruction by modelling the behaviour of excellent teachers.

Squirrel AI Learning uses more than ten algorithms, deep learning and other technologies. Its claimed industry-first AI applications include the MCM ability-training system (Model of Thinking, Capacity and Methodology), cause-of-mistake knowledge-map reconstruction, nanoscale knowledge-point decomposition, association probabilities between non-associated knowledge points, and MIBA. The system aims to give each child the most suitable learning path, drive learning with interest and encouragement, and improve learning efficiency. In addition, Squirrel AI Learning combines artificial intelligence with human teachers to address the high cost of classes, the scarcity of famous teachers and the low learning efficiency of traditional education, so as to promote educational equality.
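Bayesian networks appear among the techniques listed above. As a purely illustrative sketch (this is a textbook technique, Bayesian knowledge tracing, not Squirrel AI's actual engine, and the parameter values are assumptions), here is how an adaptive system can update its belief that a student has mastered a knowledge point after each answer:

```python
# Bayesian knowledge tracing (BKT): a standard adaptive-learning technique,
# shown as a generic illustration only, not Squirrel AI's actual engine.
# p_mastery is P(student knows the skill); it is updated after each answer.

def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    # Posterior P(known | observed answer) via Bayes' rule.
    if correct:
        num = p_mastery * (1 - p_slip)
        den = num + (1 - p_mastery) * p_guess
    else:
        num = p_mastery * p_slip
        den = num + (1 - p_mastery) * (1 - p_guess)
    posterior = num / den
    # The student may also learn something from the attempt itself.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior belief that the skill is mastered
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
print(round(p, 3))
```

A system built this way might advance the student to the next knowledge point once the mastery estimate crosses a threshold (say 0.95), and otherwise keep recommending targeted practice.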

Later, Derek Li shared three real stories, which let the audience feel more directly the achievements of Squirrel AI Learning in teaching students according to their aptitude and promoting educational equality.

The first story concerns the daughter of Derek Li's driver, who scored only 25 points despite various other types of tutoring. After learning with Squirrel AI's adaptive engine, she was admitted to the best school within her reach: the top Boeing aircraft-maintenance programme at a vocational high school. It was the personalisation and precision of AI teachers that changed the fate of a so-called "poor student" of traditional education.

The second story concerns Derek Li's own twin boys, excellent students since childhood whose overall skills improved greatly after using the MCM system. "Education is not about the learning of knowledge points and test scores; quality education should mean that after you have forgotten all the knowledge, your ability allows you to face any problem in your life," Derek Li concluded. This year, his eldest son, in the second grade of primary school, was able to give a speech to an audience of 2,500 without any stage fright. This is the success of MCM, which enables students with outstanding achievements to greatly improve their overall quality beyond exam-oriented education.

The third story took place in Qingtai County, a poverty-stricken county in China, where Squirrel AI Learning spent two months helping children in mountain areas using "trace to the source" methods for student learning. Within those two months, the achievement level of these rural children not only exceeded that of children in the county seat; some far exceeded the average level of students in Wuhan (the capital of Hubei province). High-quality educational resources are scarce in China, distributed unevenly not only across second-, third- and fourth-tier cities but within first-tier cities as well. If every student had the most knowledgeable AI teacher at their side, education equity would be more than a slogan, and every poor child could realise their own dream.

Derek Li also said that his ultimate wish is to build Squirrel AI Learning into an omniscient, omnipotent teacher, a combination of Confucius, Da Vinci and Einstein, in the hope of truly using artificial intelligence to change the course of human education.

Conclusion

Using technological innovation to leverage the personalised-education market, Squirrel AI Learning aims to give every child an AI super-teacher combining Confucius, Da Vinci and Einstein.

In the past five years, Squirrel AI Learning has opened more than 2,300 learning centres in more than 700 cities and counties across more than 20 provinces in China. With a business model connecting online and offline, it builds its core AI technology into an intelligent system for K12 full-course extracurricular tutoring, which has taught nearly 2 million registered students. With the efforts of Squirrel AI Learning by Yixue Group, artificial intelligence technology may yet break through the limits of the traditional education model and bring personalised education to every child.

SOURCE Squirrel AI Learning


Artificial intelligence must be used with care – The Australian Financial Review

"When AI goes into the machine learning space, it opens up a range of issues such as biases and privacy," she says. "Boards have to be switched on to this and be able to ask the right questions."

According to Williams, a significant proportion of the challenges caused by AI usage within companies comes from the fact that the technology is far from transparent. "Even the people who build it don't really know why it does what it does," she says. "The board is critical. If it is successful in understanding AI, developing strategies for it, and integrating it into mainstream business strategy, the payoff is huge."

Asked to nominate other technology-related issues occupying the minds of board members, panel members pointed to a range including security and the ability to withstand cyber attacks.

"Cyber security is really at the top of the list," says David Attenborough, managing director and chief executive at betting company Tabcorp. "This is because any company is under permanent attack from different directions, and you need to be protecting your customers, your networks and your employees from those attacks."

"The other major issue that keeps me awake at night is the resilience of networks, because we have multiple systems supporting a massive retail network and a big digital network. On big days, such as the Melbourne Cup, if you have a system that goes down it is incredibly expensive and disruptive, and reputationally damaging."

David Attenborough, managing director and chief executive at betting company Tabcorp, says cyber security is top priority. Photo: Jesse Marlow

While information technology is a critical component for organisations of all sizes, the panellists also stressed that Australian businesses must be more than simply technology consumers.

To achieve long-term growth, it is vital to deploy new technologies to underpin sustained and far-reaching innovation.

"The board wants to see a pipeline of ideas," says Stops. "They want to know that the company is constantly thinking about new ways to do things and that the pipeline is constantly being filled and fed through."

She says innovation is not something that is unique to a particular group. Rather, it has to be a mindset and something that is in place right across an organisation.

"The usual approach within a lot of companies has been to carve off a group and call it an innovation team," she says. "Companies are now realising that this is not creating an innovation culture. It's just putting some smart people in a corner."

Stops warns, however, that it's important how innovation and new ideas are handled. Care needs to be taken that they don't get caught up in traditional multiple layers of approval, which can lead to a good idea dying before it can be fully developed.

"The board should be keen to make sure there is a way in which those ideas can move through the organisation quite quickly," she says.

"Also, there is a need to create a culture in which it is OK to fail. A lot of organisations spend money on innovation and new ideas, and if they don't work, people are shot and off they go. That is not what an innovation culture is all about."


Accountability is the key to ethical artificial intelligence, experts say – ComputerWeekly.com

Artificial intelligence (AI) needs to be more accountable, but ethical considerations are not keeping pace with the technology's rate of deployment, says a panel of experts.

This is partly due to the black-box nature of AI, whereby it's almost impossible to determine how or why an AI makes the decisions it does, as well as the complexities of creating an unbiased AI.

However, according to panellists at the Bristol Technology Showcase, transparency is not enough; greater accountability is the key to solving many of the ethical issues surrounding AI.

"Meaningful transparency doesn't simply follow from doing things like open sourcing the code; that's not sufficient," says Eamonn O'Neill, professor of computer science at the University of Bath and director of the UKRI Centre for Doctoral Training in Accountable, Responsible and Transparent AI.

"Code and deep learning networks can be opaque however hard you try to open them to inspection. How does seeing a million lines of code help you understand what your smartphone's middleware is doing? Probably not a lot."

O'Neill says that AI needs to be accompanied by a chain of accountability that holds the system's human operator responsible for the decisions of the algorithm.

"We don't go to a company and say 'I can't tell if you've cooked the books because I can't access the neurons of your accountants'. Nobody cares about accountants' neurons, and we shouldn't care about the internal workings of AI neural networks either," he said.

Instead, O'Neill says we should be focusing on outcomes.

John Buyers, chair of the AI and Ethics panel and a partner at law firm Osborne Clarke, points to the example of Mount Sinai Hospital using an AI system called Deep Patient, which was made to trawl through thousands of electronic health records.

"Over the course of doing that, Deep Patient became very adept at diagnosing, among other things, adult schizophrenia, which human doctors simply couldn't do," he says. "They don't know how the system got to that, but it was of demonstrable public benefit."

Zara Nanu, CEO of human resources technology company Gapsquare, says: "When we talk about bias, it's bias in terms of the existing data we have that machines are looking at, but also the bias in the algorithms we then apply to the data."

She gives the example of Amazon, which gathered a team of data scientists to develop an algorithm that would help it identify top engineers from around the world, who could then be recruited by the company.

"All was going well, except the machines had learnt to exclude women from the candidate pool. The system was down-scoring people who had 'woman' on their CV, and scoring people higher who used words like 'lead' or 'manage'," she says.

"Amazon came under scrutiny and tried to look at how they could make it fairer, but they had to scrap the programme because they couldn't hand-on-heart say the algorithm wouldn't end up discriminating against another group."

Therefore, while accountability does not remove potential bias in the first place, it did make Amazon, as the entity operating the AI system, responsible for the negative effects or consequences of that bias.
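The mechanism Nanu describes is easy to reproduce in miniature. The sketch below (toy data and token names are invented for illustration; this is not Amazon's system) trains a tiny logistic-regression classifier on hypothetical, historically biased hiring decisions and shows that the learned weight for a gender-associated token turns negative:

```python
import math

# Toy historical hiring data (hypothetical): each CV is a bag of tokens
# plus a past hiring decision. The decisions themselves are biased:
# CVs mentioning "womens" (e.g. "women's chess club") were mostly rejected.
VOCAB = ["python", "lead", "manage", "womens"]
DATA = [
    ({"python", "lead"}, 1),
    ({"python", "manage"}, 1),
    ({"lead", "manage"}, 1),
    ({"python"}, 1),
    ({"python", "womens"}, 0),
    ({"womens", "lead"}, 0),
    ({"python", "womens", "manage"}, 0),
    ({"womens"}, 0),
]

def featurize(tokens):
    # Binary bag-of-words vector over the fixed vocabulary.
    return [1.0 if t in tokens else 0.0 for t in VOCAB]

def train(data, epochs=2000, lr=0.5):
    # Plain logistic regression fitted by stochastic gradient descent.
    w = [0.0] * len(VOCAB)
    b = 0.0
    for _ in range(epochs):
        for tokens, label in data:
            x = featurize(tokens)
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            err = p - label                 # gradient of the log-loss
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

w, b = train(DATA)
weights = dict(zip(VOCAB, w))
# The model has absorbed the historical bias: the mere presence of
# "womens" now pushes a CV's score down, while "lead" pushes it up.
print({t: round(wt, 2) for t, wt in weights.items()})
```

The point is that no one wrote a rule penalising the token; the model inferred it from the biased labels, which is why auditing outcomes, not just code, matters.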

However, Chris Ford, a Smith and Williamson partner responsible for a $270m AI investment fund, says there's a critical deficit in the way many corporate entities are approaching the deployment of the technology.

"MIT Sloan and Boston Consulting Group produced an interesting paper earlier this year surveying 3,000 companies globally, most of them outside North America," he says.

"What was eye-catching was that about half of those who responded said they can see no strategic risk in the deployment of AI platforms within their business, and I find that quite extraordinary."

Ford says this is partly due to a fear of missing out on the latest technological trends, but also because there is not enough emphasis on ethics in education related to AI.

He notes the example of Stuart Russell's book, Artificial Intelligence: A Modern Approach, which has been through numerous iterations and is one of the most popular course texts in the world.

"That textbook in its most recent form is up to 1,100 pages," he says. "It's extraordinarily comprehensive, but the section that deals with ethics is covered in the first 36 pages."

"So there's an issue of emphasis here, both in respect of the academic training of data scientists and in what they're expected to engage with in the commercial world when they leave education."

In terms of bias, the panellists also note that what is socially normal or acceptable is itself biased.

"The question then becomes: whose societal norms are we talking about? We are already seeing significant differences in perspective on the adoption of AI in different parts of the world," says Ford.

Buyers summarised that a lack of bias is not the introduction of objectivity but the application of subjectivity in accordance with societal norms, "so it's incredibly difficult".

The overall argument is that AI, like humans, will always be biased to a point of view, meaning transparency will only go so far in solving the ethical issues around the deployment of AI.

"Using AI, in contrast to humans, can facilitate transparency. We can fully document the software engineering process, the data, the training and the system performance, and these measures can be used to support systematic inspection, and therefore transparency and regulation. But accountability and responsibility must stay with the humans," says O'Neill.

The Bristol Technology Showcase was held in November 2019, and focused on the impact of emerging technologies on both businesses and wider society.


Beethovens unfinished tenth symphony to be completed by artificial intelligence – Classic FM

16 December 2019, 16:31

Beethoven's unfinished symphony is set to be completed by artificial intelligence, in the run-up to celebrations around the 250th anniversary of the composer's birth.

A computer is set to complete Beethoven's unfinished tenth symphony, in the most ambitious project of its kind.

Artificial intelligence has recently been used to complete Schubert's "Unfinished" Symphony No. 8, as well as to attempt to match the playing of the revered 20th-century pianist Glenn Gould.

Beethoven famously wrote nine symphonies (you can read more about the "Curse of the Ninth"). But alongside his Symphony No. 9, which contains the "Ode to Joy", there is evidence that he began writing a tenth.

Unfortunately, when the German composer died in 1827, he left only drafts and notes of the composition.


A team of musicologists and programmers has been training the artificial intelligence by playing it snippets of Beethoven's unfinished Symphony No. 10, as well as sections from other works such as his "Eroica" Symphony. The AI is then left to improvise the rest.
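The reported system is far more sophisticated, but the train-on-snippets-then-improvise idea can be illustrated with a deliberately simple model (the note sequences below are invented for illustration, not Beethoven's actual sketches): a first-order Markov chain that learns note-to-note transition statistics from fragments and samples a continuation:

```python
import random

# A minimal sketch of "train on snippets, then improvise": a first-order
# Markov chain over note names. The real project reportedly uses far more
# sophisticated models; this only illustrates learning transition
# statistics from fragments and sampling a continuation.
def build_transitions(snippets):
    table = {}
    for snippet in snippets:
        for cur, nxt in zip(snippet, snippet[1:]):
            table.setdefault(cur, []).append(nxt)
    return table

def improvise(table, seed_note, length, rng):
    melody = [seed_note]
    while len(melody) < length:
        options = table.get(melody[-1])
        if not options:  # dead end: no observed continuation
            break
        melody.append(rng.choice(options))
    return melody

# Hypothetical training fragments (not actual Beethoven material).
snippets = [
    ["E", "E", "F", "G", "G", "F", "E", "D"],
    ["C", "C", "D", "E", "E", "D", "D"],
    ["E", "D", "C", "D", "E", "E", "E"],
]
table = build_transitions(snippets)
print(improvise(table, "E", 8, random.Random(10)))
```

Each run with a different seed improvises a different continuation, which echoes the project team's remark below that the algorithm "surprises us every day".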

Matthias Roeder, project leader and director of the Herbert von Karajan Institute, told Frankfurter Allgemeine Sonntagszeitung: "No machine has been able to do this for so long. This is unique."

"The quality of genius cannot be fully replicated, still less if you're dealing with Beethoven's late period," said Christine Siegert, head of the Beethoven Archive in Bonn and one of those managing the project.

"I think the project's goal should be to integrate Beethoven's existing musical fragments into a coherent musical flow," she told the German broadcaster Deutsche Welle. "That's difficult enough, and if this project can manage that, it will be an incredible accomplishment."


It remains to be seen, and heard, whether the newly completed composition will sound anything like Beethoven's own work. But Mr Roeder has said the algorithm is making positive progress.


"The algorithm is unpredictable; it surprises us every day. It is like a small child who is exploring the world of Beethoven."

"But it keeps going and, at some point, the system really surprises you. And that happened the first time a few weeks ago. We're pleased that it's making such big strides."

There will also, reliable sources have confirmed, be some human involvement in the project. Although the computer will write the music, a living composer will orchestrate it for playing.

The results of the experiment will be premiered by a full symphony orchestra in a public performance in Bonn, Beethoven's birthplace in Germany, on 28 April 2020.


Artificial Intelligence Isn’t an Arms Race With China, and the United States Shouldn’t Treat It Like One – Foreign Policy

At the last Democratic presidential debate, the technologist candidate Andrew Yang emphatically declared that "we're in the process of potentially losing the AI arms race to China right now." As evidence, he cited Beijing's access to vast amounts of data and its substantial investment in research and development for artificial intelligence. Yang and others, most notably the National Security Commission on Artificial Intelligence, which released its interim report to Congress last month, are right about China's current strengths in developing AI and the serious concerns this should raise in the United States. But framing advances in the field as an arms race is both wrong and counterproductive. Instead, while being clear-eyed about China's aggressive pursuit of AI for military use and human rights-abusing technological surveillance, the United States and China must find their way to dialogue and cooperation on AI. A practical, nuanced mix of competition and cooperation would better serve U.S. interests than an arms-race approach.

AI is one of the great collective Rorschach tests of our times. Like any topic that captures the popular imagination but is poorly understood, it soaks up the zeitgeist like a sponge.

It's no surprise, then, that as the idea of great-power competition has re-engulfed the halls of power, AI has gotten caught up in the race narrative. China, Americans are told, is barreling ahead on AI, so much so that the United States will soon be lagging far behind. Like the fears that surrounded Japan's economic rise in the 1980s or the Soviet Union in the 1950s and 1960s, anxieties about technological dominance are really proxies for U.S. insecurity about its own economic, military, and political prowess.

Yet as a technology, AI does not naturally lend itself to this framework and is not a strategic weapon. Despite claims that AI will change nearly everything about warfare, and notwithstanding its ultimate potential, for the foreseeable future AI will likely only incrementally improve existing platforms, unmanned systems such as drones, and battlefield awareness. Ensuring that the United States outpaces its rivals and adversaries in the military and intelligence applications of AI is important and worth the investment. But such applications are just one element of AI development and should not dominate the United States' entire approach.

The arms-race framework raises the question of what one is racing toward. Machine learning, the AI subfield of greatest recent promise, is a vast toolbox of capabilities and statistical methods, a bundle of technologies that do everything from recognizing objects in images to generating symphonies. It is far from clear what exactly would constitute "winning" in AI, or even being "better" at a national level.

The National Security Commission is absolutely right that developments in AI cannot be separated from the emerging strategic competition with China and developments in the broader geopolitical landscape. U.S. leadership in AI is imperative. Leading, however, does not mean winning. Maintaining superiority in the field of AI is necessary but not sufficient. True global leadership requires proactively shaping the rules and norms for AI applications, ensuring that the benefits of AI are distributed worldwide, broadly and equitably, and stabilizing great-power competition that could lead to catastrophic conflict.

That requires U.S. cooperation with friends and even rivals such as China. Here, we believe that important aspects of the National Security Commission on AI's recent report have gotten too little attention.

First, as the commission notes, official U.S. dialogue with China and Russia on the use of AI in nuclear command and control, AI's military applications, and AI safety could enhance strategic stability, like arms control talks during the Cold War. Second, collaboration on AI applications by Chinese and American researchers, engineers, and companies, as well as bilateral dialogue on rules and standards for AI development, could help buffer the competitive elements of an increasingly tense U.S.-Chinese relationship.

Finally, there is a much higher bar to sharing core AI inputs such as data and software, and to building AI for shared global challenges, if the United States sees AI as an arms race. Although commercial and military applications for AI are increasing, applications for societal good (addressing climate change, improving disaster response, boosting resilience, preventing the emergence of pandemics, managing armed conflict, and assisting in human development) are lagging. These would benefit from multilateral collaboration and investment, led by the United States and China.

The AI arms race narrative makes for great headlines, but the unbridled U.S.-Chinese competition it implies risks pushing the United States and the world down a dangerous path. Washington and Beijing should recognize the fallacy of a generalized AI arms race in which there are no winners. Instead, both should lead by leveraging the technology to spur dialogue between them and foster practical collaboration to counter the many forces driving them apart, benefiting the whole world in the process.


Schlumberger inks deal to expand artificial intelligence in the oil field – Chron


Oilfield service giant Schlumberger has inked a deal to expand the use of artificial intelligence technology in the oil patch.

In a statement, Schlumberger announced it had entered into an agreement with the New York software company Dataiku.

Under the agreement, the two companies will work together to develop artificial intelligence products and services for Schlumberger's exploration and production customers.

With U.S. crude oil prices stuck in the mid-$50 per barrel range, many energy companies are adopting digital tools to increase efficiency and lower costs.

The deal between Schlumberger and Dataiku comes less than a month after oilfield service company rival Baker Hughes entered into a similar deal withtech giant Microsoft and Silicon Valley artificial intelligence company C3.ai.

Headquartered in Paris with its principal offices in Houston, Schlumberger is the largest oilfield service company in the world with more than 100,000 employees in 85 nations.

The company posted a $2.2 billion profit on $32.8 billion of revenue in 2018.

Read the latest oil and gas news from HoustonChronicle.com

See the original post here:

Schlumberger inks deal to expand artificial intelligence in the oil field - Chron

Bosch's A.I.-powered tech could prevent accidents by staring at you – Digital Trends

Most cars sold new in 2019 are equipped with technology that lets them scope out the road ahead. They can brake when a pedestrian crosses the road in front of them, for example, or accelerate on their own when a semi passing a slower vehicle moves back into the right lane. Now, Bosch is developing artificial intelligence-powered technology that opens new horizons by teaching cars how to see what and who is riding in them. It sounds creepy, but it could save your life.

Bosch's system primarily relies on a small camera integrated into the steering wheel. Facial-recognition technology tells it whether the driver is falling asleep, looking down at a funny video on a phone, yelling at the rear passengers, or otherwise distracted. Artificial intelligence teaches it how to recognize many different situations. The system then takes the most appropriate action. It tries to wake you up if you're dozing off, and it reminds you to look ahead if your eyes are elsewhere. Alternatively, it can recommend a break from driving and, in extreme cases, slow down the car to prevent a collision.

Driver awareness monitoring systems are already on the market in 2019. Cadillac's Super Cruise technology notably relies on one to tell whether the driver is paying attention, but Bosch's solution is different because it's being trained to recognize a wide variety of scenarios via image-processing algorithms. This approach is similar to how the German firm teaches autonomous cars to interpret objects around them. Real-world footage of drivers falling asleep (hopefully on test tracks, and not on I-80) shows the software precisely what happens before the driver calls it a night.

This technology can also keep an eye on your passengers. Thanks to a camera embedded in the rearview mirror, the system can monitor the people riding in the back and warn the driver if one isn't wearing a seat belt. It can even detect the position a given passenger is sitting in, and adjust the airbag and seat belt parameters accordingly. Safety systems are designed to work when someone is sitting facing forward and upright, but that's not always the case. If you're slouching in the back seat (admit it, it happens), the last thing you want is for the side airbag to become a throat airbag.

Smartphone connectivity plays a role here, too. The same mirror-mounted camera recognizes when a child is left in the back seat, and it automatically sends an alert to the driver's smartphone. It notifies the relevant emergency services if the driver doesn't come back after a predetermined amount of time.

Looking further ahead, when autonomous technology finally merges into the mainstream, this tech could tell the car if the driver is ready to take over. There's no sense in asking someone to drive if they're asleep, or if they've hopped over the driver's seat to chill on the rear bench. Autonomy will come in increments, so it's not too far-fetched to imagine a car capable of driving itself at freeway speeds, when the lane markings are clear, but not in crowded urban centers.

The footage captured by the cameras can't be used against you or yours, according to Bosch, because it's neither saved nor shared with third parties. Still, it's a feature that will certainly raise more than a few concerns about privacy.

The technology could reach production in 2022, when European Union officials will make driver-monitoring technology mandatory in all new cars. Lawmakers hope the feature will save 25,000 lives and prevent at least 140,000 severe injuries by 2038. There's no word yet on when (or whether) it will come to the United States. Bosch doesn't make cars (it never has), so it's up to automakers to decide whether the technology is worth putting in their new models.

See more here:

Boschs A.I.-powered tech could prevent accidents by staring at you - Digital Trends

Tip: Seven recommendations for introducing artificial intelligence to your newsroom – Journalism.co.uk

Artificial intelligence is now commonly used in journalism for anything from combing through large datasets to writing stories.

To help you prepare for the future, the Journalism AI team at Polis, London School of Economics and Political Science (LSE), put together a training module: seven things to consider before adopting AI in your news organisation.

"Keep in mind that this is not a manual for implementation," writes professor Charlie Beckett, who leads Journalism AI.

"The recommendations will help you reflect on your newsroom's AI-readiness, but they won't tell you how to design a strategy. We link to more resources that might help you with that, and we hope to produce more training resources ourselves in the near future."

For more insights into the Journalism AI report, you can watch this three-minute video, as well as Charlie Beckett's presentation of the report at its launch event.

Visit link:

Tip: Seven recommendations for introducing artificial intelligence to your newsroom - Journalism.co.uk

Joint Artificial Intelligence Center Director tells Naval War College audience to ‘Dive In’ on AI – What’sUpNewp

Saying the most important thing to do is "just dive in," Lt. Gen. Jack Shanahan, director of the Department of Defense Joint Artificial Intelligence Center, talked to U.S. Naval War College students and faculty on Dec. 12 about the challenges and opportunities of fielding artificial intelligence technology in the U.S. military.

"On one side of the emerging tech equation, we need far more national security professionals who understand what this technology can do or, equally important, what it cannot do," Shanahan told his audience in the college's Mahan Reading Room.

"On the other side of the equation, we desperately need more people who grasp the societal implications of new technology, who are capable of looking at this new data-driven world through geopolitical, international relations, humanitarian and even philosophical lenses," he said.

At the Joint AI Center, established in 2018 at the Pentagon, Shanahan is responsible for accelerating the Defense Department's adoption and integration of AI in order to quickly affect national security operations at the largest possible scale.

He told the Naval War College audience that the most valuable contribution of AI to U.S. defense will be how it helps human beings to make better, faster and more precise decisions, especially during high-consequence operations.

"AI is like electricity or computers. Like electricity, AI is a transformative, general-purpose enabling technology capable of being used for good or for evil, but not a thing unto itself. It is not a weapons system, a gadget or a widget," said the Air Force general, whose prior position was director of Project Maven, a Defense Department program using machine learning to autonomously extract objects of interest from photos or video.

"If I have learned anything over the past three years, it's that there's a chasm between thinking, writing and talking about AI, and doing it," Shanahan said.

"There is no substitute whatsoever for rolling up one's sleeves and diving into an AI project," he said.

Shanahan said adapting the Department of Defense to the AI world will be a multigenerational journey, requiring both urgency and patience.

He compared this moment in history to the period between World War I and World War II, when new ideas led to an explosion not just in military innovation but in technology advancement that eventually helped create Silicon Valley.

Now, the private sector is leading the way on AI, which leaves the Defense Department playing catch-up, Shanahan said. However, he added that he expects the U.S. military's efforts to be running at a tempo comparable to commercial industry's five years from now.

China, he said, sees AI as a way to leapfrog over the current U.S. defense advantages.

"The Chinese military has identified 'intelligentization' as a military revolution on par with mechanization from the internal combustion engine," Shanahan said. "They are sprinting to incorporate AI technology in all aspects of their military, and the Chinese commercial industry is more than willing to help."

After the speech, in an interview, Shanahan said AI isn't an arms race, but it is a strategic competition.

"Regardless of what China does or does not do in AI, we have to accelerate our adoption of it. It's that important to our future," he said.

For example, Shanahan asked, what if, in 15 years, China has a fully AI-enabled military force and the United States does not?

"To me that scenario brings us an unacceptably high risk of failure because of the speed of the fight in the future, which we have not been prepared for as a result of fighting in the Middle East for 20-some years," he said. "That, to me, is the best stark example of why we have to move in this direction."

Looking at the importance of military higher education in the effort, Shanahan said the role of institutions such as the Naval War College is to make a place for the military's rising stars to think about new ways to harness AI.

"What you are here to do is think strategy, the strategic and societal implications of using emerging and disruptive technology," he said.

"You will find somebody comes out of here that has a spark, a lightbulb moment, that wants to go back and try this idea they developed while they were here," said Shanahan, who is a 1996 graduate of the Naval War College's College of Naval Command and Staff.

The Joint AI Center director said another role for military higher-education institutions is research on practical applications of AI.

"It's the thinking about grand strategy and technology together that may be as important to the future of operating concepts as anything else," he said.

Source: USNWC Public Affairs Office | Jeanette Steele, U.S. Naval War College Public Affairs

Here is the original post:

Joint Artificial Intelligence Center Director tells Naval War College audience to 'Dive In' on AI - What'sUpNewp