Artificial Intelligence Isn’t an Arms Race With China, and the United States Shouldn’t Treat It Like One – Foreign Policy

At the last Democratic presidential debate, the technologist candidate Andrew Yang emphatically declared that the United States is "in the process of potentially losing the AI arms race to China right now." As evidence, he cited Beijing's access to vast amounts of data and its substantial investment in research and development for artificial intelligence. Yang and others, most notably the National Security Commission on Artificial Intelligence, which released its interim report to Congress last month, are right about China's current strengths in developing AI and the serious concerns this should raise in the United States. But framing advances in the field as an arms race is both wrong and counterproductive. Instead, while being clear-eyed about China's aggressive pursuit of AI for military use and human rights-abusing technological surveillance, the United States and China must find their way to dialogue and cooperation on AI. A practical, nuanced mix of competition and cooperation would better serve U.S. interests than an arms race approach.

AI is one of the great collective Rorschach tests of our times. Like any topic that captures the popular imagination but is poorly understood, it soaks up the zeitgeist like a sponge.

It's no surprise, then, that as the idea of great-power competition has reengulfed the halls of power, AI has gotten caught up in the "race" narrative. China, Americans are told, is barreling ahead on AI, so much so that the United States will soon be lagging far behind. Like the fears that surrounded Japan's economic rise in the 1980s or the Soviet Union in the 1950s and 1960s, anxiety around technological dominance is really a proxy for U.S. insecurity about its own economic, military, and political prowess.

Yet as a technology, AI does not naturally lend itself to this framework and is not a strategic weapon. Despite claims that AI will change nearly everything about warfare, and notwithstanding its ultimate potential, for the foreseeable future AI will likely only incrementally improve existing platforms, unmanned systems such as drones, and battlefield awareness. Ensuring that the United States outpaces its rivals and adversaries in the military and intelligence applications of AI is important and worth the investment. But such applications are just one element of AI development and should not dominate the United States' entire approach.

The arms race framework raises the question of what one is racing toward. Machine learning, the AI subfield of greatest recent promise, is a vast toolbox of capabilities and statistical methods, a bundle of technologies that do everything from recognizing objects in images to generating symphonies. It is far from clear what exactly would constitute "winning" in AI, or even "being better" at a national level.

The National Security Commission is absolutely right that developments in AI cannot be separated from the emerging strategic competition with China and developments in the broader geopolitical landscape. U.S. leadership in AI is imperative. Leading, however, does not mean winning. Maintaining superiority in the field of AI is necessary but not sufficient. True global leadership requires proactively shaping the rules and norms for AI applications, ensuring that the benefits of AI are distributed worldwide, broadly and equitably, and stabilizing great-power competition that could lead to catastrophic conflict.

That requires U.S. cooperation with friends and even rivals such as China. Here, we believe that important aspects of the National Security Commission on AI's recent report have gotten too little attention.

First, as the commission notes, official U.S. dialogue with China and Russia on the use of AI in nuclear command and control, AI's military applications, and AI safety could enhance strategic stability, much as arms control talks did during the Cold War. Second, collaboration on AI applications by Chinese and American researchers, engineers, and companies, as well as bilateral dialogue on rules and standards for AI development, could help buffer the competitive elements of an increasingly tense U.S.-Chinese relationship.

Finally, there is a much higher bar to sharing core AI inputs such as data and software, and to building AI for shared global challenges, if the United States sees AI as an arms race. Although commercial and military applications for AI are increasing, applications for societal good (addressing climate change, improving disaster response, boosting resilience, preventing the emergence of pandemics, managing armed conflict, and assisting in human development) are lagging. These would benefit from multilateral collaboration and investment, led by the United States and China.

The AI arms race narrative makes for great headlines, but the unbridled U.S.-Chinese competition it implies risks pushing the United States and the world down a dangerous path. Washington and Beijing should recognize the fallacy of a generalized AI arms race in which there are no winners. Instead, both should lead by leveraging the technology to spur dialogue between them and foster practical collaboration to counter the many forces driving them apart, benefiting the whole world in the process.


Schlumberger inks deal to expand artificial intelligence in the oil field – Chron

Oilfield service giant Schlumberger has inked a deal to expand its offerings of artificial intelligence products and services.


Oilfield service giant Schlumberger has inked a deal to expand the use of artificial intelligence technology in the oil patch.

In a statement, Schlumberger announced it had entered into an agreement with the New York software company Dataiku.

Under the agreement, the two companies will work together to develop artificial intelligence products and services for Schlumberger's exploration and production customers.


With U.S. crude oil prices stuck in the mid-$50 per barrel range, many energy companies are adopting digital tools to increase efficiency and lower costs.

The deal between Schlumberger and Dataiku comes less than a month after oilfield service company rival Baker Hughes entered into a similar deal with tech giant Microsoft and Silicon Valley artificial intelligence company C3.ai.


Headquartered in Paris with its principal offices in Houston, Schlumberger is the largest oilfield service company in the world with more than 100,000 employees in 85 nations.

The company posted a $2.2 billion profit on $32.8 billion of revenue in 2018.



Bosch's A.I.-powered tech could prevent accidents by staring at you – Digital Trends

Most cars sold new in 2019 are equipped with technology that lets them scope out the road ahead. They can brake when a pedestrian crosses the road in front of them, for example, or accelerate on their own when a semi passing a slower vehicle moves back into the right lane. Now, Bosch is developing artificial intelligence-powered technology that opens new horizons by teaching cars how to see what and who is riding in them. It sounds creepy, but it could save your life.

Bosch's system primarily relies on a small camera integrated into the steering wheel. Facial-recognition technology tells it whether the driver is falling asleep, looking down at a funny video on a phone, yelling at the rear passengers, or otherwise distracted. Artificial intelligence teaches it how to recognize many different situations. The system then takes the most appropriate action. It tries to wake you up if you're dozing off, and it reminds you to look ahead if your eyes are elsewhere. Alternatively, it can recommend a break from driving and, in extreme cases, slow down the car to prevent a collision.

Driver awareness monitoring systems are already on the market in 2019. Cadillac's Super Cruise technology notably relies on one to tell whether the driver is paying attention, but Bosch's solution is different because it's being trained to recognize a wide variety of scenarios via image-processing algorithms. This approach is similar to how the German firm teaches autonomous cars to interpret objects around them. Real-world footage of drivers falling asleep (hopefully on test tracks, and not on I-80) shows the software precisely what happens before the driver calls it a night.

This technology can also keep an eye on your passengers. Thanks to a camera embedded in the rearview mirror, the system can keep an eye on the people riding in the back, and warn the driver if one isn't wearing a seat belt. It can even detect the position a given passenger is sitting in, and adjust the airbag and seat belt parameters accordingly. Safety systems are designed to work when someone is sitting facing forward and upright, but that's not always the case. If you're slouching in the back seat (admit it, it happens), the last thing you want is for the side airbag to become a throat airbag.

Smartphone connectivity plays a role here, too. The same mirror-mounted camera recognizes when a child is left in the back seat, and it automatically sends an alert to the driver's smartphone. It notifies the relevant emergency services if the driver doesn't come back after a predetermined amount of time.

Looking further ahead, when autonomous technology finally merges into the mainstream, this tech could tell the car if the driver is ready to take over. There's no sense in asking someone to drive if they're asleep, or if they've hopped over the driver's seat to chill on the rear bench. Autonomy will come in increments, so it's not too far-fetched to imagine a car capable of driving itself at freeway speeds, when the lane markings are clear, but not in crowded urban centers.

The footage captured by the cameras can't be used against you or yours, according to Bosch, because it's neither saved nor shared with third parties. Still, it's a feature that will certainly raise more than a few concerns about privacy.

The technology could reach production in 2022, when European Union officials will make driver-monitoring technology mandatory in all new cars. Lawmakers hope the feature will save 25,000 lives and prevent at least 140,000 severe injuries by 2038. There's no word yet on when (or whether) it will come to the United States. Bosch doesn't make cars (it never has), so it's up to automakers to decide whether the technology is worth putting in their new models.


Tip: Seven recommendations for introducing artificial intelligence to your newsroom – Journalism.co.uk

Artificial intelligence is now commonly used in journalism for anything from combing through large datasets to writing stories.

To help you prepare for the future, the Journalism AI team at Polis, London School of Economics and Political Science (LSE), put together a training module on seven things to consider before adopting AI in your news organisation.

"Keep in mind that this is not a manual for implementation," writes professor Charlie Beckett, who leads Journalism AI.

"The recommendations will help you reflect on your newsroom's AI-readiness, but they won't tell you how to design a strategy. We link to more resources that might help you with that, and we hope to produce more training resources ourselves in the near future."

For more insights into the Journalism AI report, you can watch this three-minute video, as well as Charlie Beckett's presentation of the report at its launch event.


Joint Artificial Intelligence Center Director tells Naval War College audience to ‘Dive In’ on AI – What’sUpNewp


Saying the most important thing to do is "just dive in," Lt. Gen. Jack Shanahan, director of the Department of Defense Joint Artificial Intelligence Center, talked to U.S. Naval War College students and faculty on Dec. 12 about the challenges and opportunities of fielding artificial intelligence technology in the U.S. military.

"On one side of the emerging tech equation, we need far more national security professionals who understand what this technology can do or, equally important, what it cannot do," Shanahan told his audience in the college's Mahan Reading Room.

"On the other side of the equation, we desperately need more people who grasp the societal implications of new technology, who are capable of looking at this new data-driven world through geopolitical, international relations, humanitarian and even philosophical lenses," he said.

At the Joint AI Center, established in 2018 at the Pentagon, Shanahan is responsible for accelerating the Defense Department's adoption and integration of AI in order to quickly affect national security operations at the largest possible scale.

He told the Naval War College audience that the most valuable contribution of AI to U.S. defense will be how it helps human beings to make better, faster and more precise decisions, especially during high-consequence operations.

"AI is like electricity or computers. Like electricity, AI is a transformative, general-purpose enabling technology capable of being used for good or for evil, but not a thing unto itself. It is not a weapons system, a gadget or a widget," said the Air Force general, whose prior position was director of Project Maven, a Defense Department program using machine learning to autonomously extract objects of interest from photos or video.

"If I have learned anything over the past three years, it's that there's a chasm between thinking, writing and talking about AI, and doing it," Shanahan said.

"There is no substitute whatsoever for rolling up one's sleeves and diving into an AI project," he said.

Shanahan said adapting the Department of Defense to the AI world will be a multigenerational journey, requiring both urgency and patience.

He compared this moment in history to the period between World War I and World War II, when new ideas led to an explosion not just in military innovation but in technology advancement that eventually helped create Silicon Valley.

Now, the private sector is leading the way on AI, which leaves the Defense Department playing catch-up, Shanahan said. However, he added that he expects the U.S. military's efforts to be running at a tempo comparable to commercial industry's within five years.

China, he said, sees AI as a way to leapfrog over the current U.S. defense advantages.

"The Chinese military has identified 'intelligent-ization' as a military revolution on par with mechanization from the internal combustion engine," Shanahan said. "They are sprinting to incorporate AI technology in all aspects of their military, and the Chinese commercial industry is more than willing to help."

After the speech, in an interview, Shanahan said AI isn't an arms race, but it is a strategic competition.

"Regardless of what China does or does not do in AI, we have to accelerate our adoption of it. It's that important to our future," he said.

For example, Shanahan asked, what if, in 15 years, China has a fully AI-enabled military force and the United States does not?

"To me, that scenario brings an unacceptably high risk of failure because of the speed of the fight in the future, which we have not been prepared for as a result of fighting in the Middle East for 20-some years," he said. "That, to me, is the starkest example of why we have to move in this direction."

Looking at the importance of military higher education in the effort, Shanahan said the role of institutions such as the Naval War College is to make a place for the military's rising stars to think about new ways to harness AI.

"What you are here to do is think strategy, the strategic and societal implications of using emerging and disruptive technology," he said.

"You will find somebody comes out of here that has a spark, a lightbulb moment, that wants to go back and try this idea they developed while they were here," said Shanahan, who is a 1996 graduate of the Naval War College's College of Naval Command and Staff.

The Joint AI Center director said another role for military higher-education institutions is research on practical applications of AI.

"It's the thinking about grand strategy and technology together that may be as important to the future of operating concepts as anything else," he said.

Source: USNWC Public Affairs Office | Jeanette Steele, U.S. Naval War College Public Affairs


It’s artificial intelligence to the rescue (and response and recovery) – GreenBiz

This article is adapted from GreenBiz's weekly newsletter, VERGE Weekly, running Wednesdays.

As global losses rack up from climate change-exacerbated natural disasters, from voracious wildfires to ferocious hurricanes, communities are scrambling to prepare (and to hedge their losses).

While information technologies such as machine learning and predictive analytics may not be able to prevent these catastrophes outright, they could help communities be better prepared to handle the aftermath. That's the spirit behind a unique collaboration between Chicago-based technology services company Exigent and the Schulich School of Business at York University in Toronto, one that aims to create a more cost-effective and efficient marketplace for disaster relief and emergency response services.

The idea is to help state and provincial governments collectively build a more centralized inventory of relief supplies and other humanitarian items based on the data from a particular wildfire or hurricane season.

Rather than buying supplies locally based on the predictions (something many small towns in fire-prone areas can ill afford), a community would buy "options" for these services in the marketplace being developed through this partnership. If the town ultimately doesn't need the items, it could "trade" them to another region that does have a need, either in the same state or another location. In effect, towns across a state, region or even country could arrange for protection without having to make that investment outright.

"Why are we not packing those crates in March, because they are going to go somewhere?" asked Exigent CEO David Holme, referring to the current system.

The most obvious reason is that it's expensive: Relief suppliers won't invest in making items unless they have certainty of orders. The intention of the Exigent-Schulich project is to move from a system that is 100 percent reactive, and consequently very slow, to one that is at least 50 percent predictive and can deliver help far more quickly, he said.

To do this, Exigent is working with AI students at Schulich to use information about a community's demographics, geology and topography, and existing infrastructure to predict what affected areas could need: how many first-aid kits to treat local citizens, how many cement bags to rebuild structures, or how many temporary housing units for residents and relief workers. All sorts of data are being consulted, from census information to historical weather data to forward-looking models for wind direction, temperature and humidity, noted Murat Kristal, program director for the Schulich master's program that is involved in the project.


The initial focus of the joint Exigent-Schulich work is on gathering data related to wildfires in Canada and the United States. The prevalence of California's fires captures many headlines: the insurance losses from the Camp, Hill and Woolsey fires in November 2018 have topped $12 billion. Although it gets far less attention, Texas is also highly prone to wildfires, and 80 percent of them occur within two miles of a community. To the north, Canadian provinces such as Alberta and Ontario are also at risk: there are an average of 6,000 fires in Canada annually.

Exigent estimates that by deploying supplies to affected regions more quickly, the platform it's developing (a pilot version is due in June) might cut recovery costs by 20 percent and drive down premiums in at-risk regions. "The municipalities and insurers can collaboratively benefit," Holme said. "The more I've studied the idea, the more useful it seems."


8 Artificial Intelligence, Machine Learning and Cloud Predictions To Watch in 2020 – Irish Tech News

Artificial Intelligence, Machine Learning and Cloud predictions by Jerry Kurata and Barry Luijbregts of Pluralsight. In this article, they share their predictions for the ways that AI, ML and the cloud will be used differently in 2020 and beyond.

This decade has seen a seismic shift in the role of technology, at work and at home. Just ten years ago, technology was a specialist discipline in the workplace, governed by experts. At home things were relatively limited and tech was more in the background. Today technology is at the centre of how everyone works, lives, learns and plays. This prominence is shifting the way we think about, use and interact with technology, and the expectations we have for it, and we wanted to share some reflections and predictions for the year ahead.

AI: Jerry Kurata

Increased User Expectations

As users experience assistants like Alexa and Siri, and cars that drive themselves, expectations of what applications can do have greatly increased. And these expectations will continue to grow in 2020 and beyond. Users expect a store's website or app to be able to identify a picture of an item and guide them to where the item and its accessories are in the store. And these expectations extend to business consumers of the technology, such as a restaurant owner.

This owner should rightfully expect the website built for them to help with their business by keeping their site fresh. The site should drive business to the restaurant by determining the sentiment of reviews and automatically displaying the most positive recent ones on the restaurant's front page.
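As a concrete illustration of that review-ranking idea, here is a minimal Python sketch. It is a toy: the word lists and function names are invented for illustration, and a real site would use a trained sentiment model rather than a hand-rolled word count.

```python
# Toy sentiment scorer: counts positive minus negative words in a review.
# Splitting on whitespace means punctuation sticks to words; that is fine
# for a sketch, but a real system would tokenize and use a trained model.
POSITIVE = {"great", "delicious", "friendly", "amazing", "perfect"}
NEGATIVE = {"slow", "cold", "rude", "bland", "terrible"}

def sentiment_score(review: str) -> int:
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def top_reviews(reviews: list[str], n: int = 3) -> list[str]:
    """Return the n most positive reviews, best first (stable for ties)."""
    return sorted(reviews, key=sentiment_score, reverse=True)[:n]

reviews = [
    "Delicious food and friendly staff",
    "Service was slow and the soup was cold",
    "Amazing experience, perfect evening",
]
print(top_reviews(reviews, n=2))
```

A site could run something like this over the latest reviews on each page load and feature the winners, swapping in a pre-trained classifier for `sentiment_score` without changing the ranking logic.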

AI/ML will go small scale

We can expect to see more AI/ML on smaller platforms, from phones to IoT devices. The hardware needed to run AI/ML solutions is shrinking in size and power requirements, making it possible to bring the power and intelligence of AI/ML to smaller and smaller devices. This is allowing the creation of new classes of intelligent applications and devices that can be deployed everywhere.

AI/ML will expand the cloud

In the race for the cloud market, the major providers (Amazon AWS, Microsoft Azure, Google Cloud) are doubling down on their AI/ML offerings. Prices are decreasing, and the number and power of services available in the cloud are ever increasing. In addition, the number of low-cost or free cloud-based facilities and compute engines for AI/ML developers and researchers is increasing.

This removes many of the hardware barriers that prevented developers in smaller companies, or in locales with limited infrastructure, from building advanced ML models and AI applications.

AI/ML will become easier to use

As AI/ML is getting more powerful, it is becoming easier to use. Pre-trained models that perform tasks such as language translation, sentiment classification and object detection are becoming readily available. And with minimal coding, these can be incorporated into applications and retrained to solve specific problems. This makes it possible, for example, to quickly create an English-to-Swahili translator by taking a pre-trained translation model and fine-tuning it on sets of equivalent phrases in the two languages.

There will be greater need for AI/ML education

To keep up with these trends, education in AI and ML is critical. And the need for education extends beyond the people developing AI/ML applications to C-suite execs, product managers, and other management personnel. All must understand what AI and ML technologies can do, and where their limits lie. But of course, the level of AI/ML knowledge required is even greater for people involved with creating products.

Regardless of whether they are a web developer, database specialist, or infrastructure analyst, they need to know how to incorporate AI and ML into the products and services they create.

Cloud: Barry Luijbregts

Cloud investment will increase

In 2019, more companies than ever adopted cloud computing and increased their investment in the cloud. In 2020, this trend will likely continue. More companies will see the benefits of the cloud and realize that they could never get the same security, performance and availability gains themselves. This new adoption, together with increased economies of scale, will lower prices for cloud storage and services even further.

Cloud will provide easier to use services

Additionally, 2020 will be the year when the major cloud providers offer more, and easier-to-use, AI services. These will provide drag-and-drop modelling features and more out-of-the-box, pre-trained models, making the adoption and use of AI accessible to the average developer.

Cloud will tackle more specific problems

On top of that, in 2020, the major cloud vendors will likely start providing solutions that tackle specific problems, such as climate change and self-driving vehicles. These new solutions can be implemented without much technical expertise and will have a major impact in these problem areas.

Looking further ahead

As we enter a new decade, we are on the cusp of another revolution, as we take our relationship with technology to the next level. Companies will continue to devote ever larger budgets to deploying the latest developments, as AI, machine learning and the cloud become integral to the successful running of any business, no matter the sector.

There have been murmurings that this increase in investment will have an impact on jobs. However, if the right technology is rolled out in the right way, it will only ever complement the human skillset, as opposed to replacing it. We have a crucial role to play in the overall process, and our relationship with technology must always remain as intended: a partnership.

Jerry Kurata and Barry Luijbregts are expert authors at Pluralsight and teach courses on topics including artificial intelligence (AI), machine learning (ML), big data, computer science and the cloud. In recent years, both have seen first-hand the development of these technologies, the different tools that organisations are investing in and the changing ways they are used.


LTTE: It’s important to know of weaponized artificial intelligence – Rocky Mountain Collegian

Editor's Note: All opinion section content reflects the views of the individual author only and does not represent a stance taken by The Collegian or its editorial board. Letters to the Editor reflect the view of a member of the campus community and are submitted to the publication for approval.

To the Editor,

I am writing this essay to bring awareness and recognition to a fast-approaching topic in the field of military technology: weaponized artificial intelligence.

Weaponized AI is any military technology that operates off a computer system that makes its own decisions. Simply put, anything that automatically decides a course of action against an enemy without human control would fall under this definition.

Weaponized AI is a perfect example of a sci-fi idea that has found its way into the real world and is not yet completely understood. This said, weaponized AI places global security at risk and must be recognized by institutions like Colorado State University before it becomes widely deployed on the battlefield.

Nations are constantly racing to employ the next best weapon as it is developed. AI is no exception. Currently, AI is responsible for one of the largest technology competitions since that over nuclear weapons during the Cold War. At the top of this competition are China and the United States.

With little to no international restrictions on the deployment of AI weaponry, a modern arms race will continue to develop, creating tension between world powers as each fears that the other side will reach the perfect AI weapon first.

The other inherent danger is the gap being created between advanced world powers and countries that are incapable of developing such technology. The likelihood of global conflict between these nations increases, as powers that wield weaponized AI have a distinct edge over countries that do not. This leaves room for misuse of that power, given the lack of international regulations on using this tech.


Going further, my studies have shown that this technology poses considerable risk to international human rights law. In its current state, weaponized AI is found to be unreliable in doing what it is intended to do. As an example, Project Maven, a current AI used by the United States, identifies military threats using complex algorithms alone.

While this seems harmless, the direction in which the world is taking this technology is not. What would happen if this technology's unreliability cost innocent lives due to a targeting error of the kind that AIs like Project Maven are prone to making? Likewise, who would take responsibility for the actions of a machine?

What we have is a blurring of moral boundaries as we come closer to allowing this technology to determine who is a true threat. These kinds of errors cannot be tolerated by the rules of modern warfare.

A final obstacle surrounding AI is the United Nations' inability to come to a consensus on its use. Researcher Eugenio Garcia with the United Nations stated, "Advanced military powers remain circumspect (guarded) about introducing severe restrictions on the use of these technologies."

Although people easily recognize the dangers that AI poses to national security, countries are not willing to restrict its development. Furthermore, with minimal current legislation addressing the technology's unreliability, weaponized AI will move beyond what we can control.

While I make these claims, one must recognize that the technology does offer the benefit of removing soldiers from the battlefield. Even so, nations around the world are not monitoring this rising issue.

Colorado State University, a tier-one research institution with investments in military technology, should be the institution that steps up to the plate and recognizes catastrophe before it happens. These threats to global security may not be present now, but if we do not advocate for international legislation, these dangers will become reality.

Sincerely,

Thomas Marshall

Third-year mechanical engineering student at CSU

Working under Azer Yalin as an undergraduate research assistant exploring Air Force technology

The Collegian's opinion desk can be reached at letters@collegian.com. To submit a letter to the editor, please follow the guidelines at collegian.com.

Go here to see the original:

LTTE: It's important to know of weaponized artificial intelligence - Rocky Mountain Collegian

Artificial intelligence is writing the end of Beethoven’s unfinished symphony – Euronews

In the run-up to Ludwig van Beethoven's 250th birthday, a team of musicologists and programmers is using artificial intelligence to complete the composer's unfinished tenth symphony.

The piece was started by Beethoven alongside his famous ninth, which includes the well-known Ode To Joy.

But by the time the German composer died in 1827, there were only a few notes and drafts of the composition.

The experiment risks failing to do justice to the beloved German composer: the team said the first few months yielded results that sounded mechanical and repetitive.

But now the project leader, Matthias Roeder, from the Herbert von Karajan Institute, insists the AI's latest compositions are more promising.

"An AI system learns an unbelievable amount of notes in an extremely short time," said Roeder. "And the first results are a bit like with people, you say 'hmm, maybe it's not so great'. But it keeps going and, at some point, the system really surprises you. And that happened the first time a few weeks ago. We're pleased that it's making such big strides."

The group is in the process of training an algorithm that will produce a completed symphony. They're doing this by playing snippets of Beethoven's work and leaving the computer to improvise the rest of it. Afterwards, they correct the improvisation so it fits with the composer's style.
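The learn-from-snippets, improvise, then correct workflow described above can be illustrated with a toy model. This is purely a sketch — the project's actual system is far more sophisticated, and the note sequence and function names below are invented for the example — but even a simple Markov chain shows the basic loop of learning which notes follow which contexts and then sampling a continuation:

```python
import random
from collections import defaultdict

def train(notes, order=2):
    """Record which note follows each length-`order` context in the training snippet."""
    model = defaultdict(list)
    for i in range(len(notes) - order):
        context = tuple(notes[i:i + order])
        model[context].append(notes[i + order])
    return model

def improvise(model, seed, length, rng):
    """Extend a seed fragment by sampling continuations from the learned contexts."""
    out = list(seed)
    order = len(seed)
    for _ in range(length):
        context = tuple(out[-order:])
        choices = model.get(context)
        if not choices:  # unseen context: fall back to a random known one
            context = rng.choice(list(model))
            choices = model[context]
        out.append(rng.choice(choices))
    return out

# Train on a short "sketch" of note names, then let the model continue it --
# mimicking, in miniature, the play-a-snippet-and-improvise workflow.
sketch = "E D C D E E E D D D E G G E D C D E E E E D D E D C".split()
model = train(sketch, order=2)
rng = random.Random(0)
continuation = improvise(model, seed=sketch[:2], length=20, rng=rng)
print(" ".join(continuation))
```

The human-correction step the team describes would then amount to editing the sampled continuation wherever it strays from the composer's style — the part no toy model can automate.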

Similar projects have been undertaken before. Schubert's eighth symphony was finished using AI developed by Huawei. It received mixed reviews.

The final result of the project will be performed by a full orchestra on 28 April next year in Bonn as part of a series of celebrations of Beethoven's work.

The year of celebrations begins on December 16th with the opening of his home in Bonn as a museum after renovation.

See the rest here:

Artificial intelligence is writing the end of Beethoven's unfinished symphony - Euronews

Artificial Intelligence Job Demand Could Live Up to Hype – Dice Insights

Anyone who's worked in technology knows that certain buzzwords rip through the industry every few years, sending executives into a fever. Artificial intelligence, Big Data, Hadoop, and Web 2.0 (please, let's do our best to forget that last one) are just a few of the more notable. But which ones will translate into actual opportunities and jobs for all the technologists out there?

If the hype doesn't match the actual industry impact, then many thousands of workers will have pursued a particular technology or discipline for nothing. But if the hype is justified, then folks can build satisfying careers (and make a lot of money). The stakes couldn't be higher.

As we head into 2020, one thing is pretty clear: Artificial intelligence (A.I.) seems like one of those much-hyped terms that might actually translate into a really robust sub-industry. For example, LinkedIn's 2020 Emerging Jobs Report (PDF) puts Artificial Intelligence Specialist as its number-one emerging job, with 74 percent annual growth over the past four years.


That outpaced robotics engineer (40 percent annual growth during the same four-year period), data scientist (37 percent annual growth), full stack engineer (35 percent annual growth), and site reliability engineer (34 percent growth). (In order to arrive at its conclusions, LinkedIn crunched data from all of its public profiles over the past five years.)

Sounds pretty solid, right? Even so, the A.I. industry comes with a relatively high bar to entry, which could restrict the pipeline of talent for the next few years. Employers want A.I. experts skilled in machine learning, deep learning, Python, natural language processing, and platforms such as TensorFlow. Those are skills that take quite some time to learn, to put it mildly, and demand a pretty strong background in programming and mathematics.

There's also the issue of company buy-in. Executives love buzzwords, but they often balk at the cost of spinning up the related technology. At this year's Wall Street Journal Future of Everything Festival, Arvind Krishna, IBM's senior vice president of cloud and cognitive software, suggested that projects tend to die once companies realize they'll need to spend a lot of time prepping the necessary datasets: "And so you run out of patience along the way, because you spend your first year just collecting and cleansing the data."

Plus, existing A.I. initiatives have a mixed track record so far. Uber's attempt to build a self-driving car platform has hit some snags, to put it mildly; IBM's much-hyped Watson platform has failed to meet some hospitals' expectations for successful healthcare data analysis; and some analysts and pundits think that even well-monetized projects such as Google's DeepMind haven't scaled or been commercialized.

Nonetheless, the future seems pretty bright for artificial intelligence and machine-learning initiatives. Even if some high-profile projects crash and burn, it's clear from the data that companies are rapidly hiring various types of employees with A.I. skills clusters. According to Burning Glass, which analyzes millions of job postings from across the U.S., jobs that involve A.I. are projected to grow 40.1 percent over the next decade; the median salary for these positions is $105,007 (for those with a PhD, it drifts up to $112,300).
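For context, that decade-long Burning Glass figure can be converted into a rough annual rate. This is a back-of-the-envelope sketch assuming smooth compound growth, which the report itself does not specify:

```python
# Burning Glass projects 40.1 percent growth in A.I.-involving jobs
# over the next decade. Under a constant compound-growth assumption,
# the equivalent annual rate r satisfies (1 + r)**10 = 1.401.
total_growth = 0.401
annual_rate = (1 + total_growth) ** (1 / 10) - 1
print(f"{annual_rate:.1%}")  # roughly 3.4% per year
```

A steady ~3.4 percent a year is solid but not explosive — a useful sanity check against the headline number.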

Positions associated with A.I. skills clusters include:

If you work in any of these roles, A.I. and machine learning tools and techniques will likely become a part of your workflow over the next several years. That means it's important to learn as much as possible about A.I. Fortunately, there are a lot of resources online that can help you out, including a Google crash course, complete with 25 lessons and 40+ exercises, that's a good introduction to machine learning concepts. HackerNoon also offers an interesting breakdown of machine learning and artificial intelligence.

Read the original post:

Artificial Intelligence Job Demand Could Live Up to Hype - Dice Insights