Artificial Intelligence in Transportation Industry Market Trends, Growth, Scope, Size, Overall Analysis and Forecast by 2025 – CueReport

A new research study has been presented offering a comprehensive analysis of the global Artificial Intelligence in Transportation Industry market, where users can benefit from a complete market research report with all the required useful information about this market. This is the latest report, covering the current COVID-19 impact on the market. The coronavirus (COVID-19) pandemic has affected every aspect of life globally and has brought along several changes in market conditions. The rapidly changing market scenario and initial and future assessments of the impact are covered in the report. The report discusses all major Artificial Intelligence in Transportation Industry market aspects with expert opinion on current market status along with historic data. The report is a detailed study of growth, investment opportunities, market statistics, competition analysis, major key players, industry facts, revenues, market shares, business strategies, top regions, demand, and developments.

The Artificial Intelligence in Transportation Industry market report contains an extensive analysis of this industry space and provides crucial insights into the major factors that are shaping the remuneration graph as well as fueling industry growth. The study also offers a granular assessment of the regional spectrum alongside the regulatory outlook of this market space. Moreover, the document measures the factors that are positively influencing the market outlook and presents a detailed SWOT analysis. Information such as the limitations and restraints faced by new entrants and market majors, alongside their individual effects on companies' growth rates, is also included. The research further elaborates on the impact of COVID-19 on the market's future remuneration and growth avenues.

Request Sample Copy of this Report @ https://www.cuereport.com/request-sample/13774

Highlighting the competitive framework of Artificial Intelligence in Transportation Industry market:

From the regional point of view of Artificial Intelligence in Transportation Industry market:

Additional data emphasized in the Artificial Intelligence in Transportation Industry market report:

Years considered for this report:

Key Questions Answered In this Report:

What is the overall Artificial Intelligence in Transportation Industry market size in 2019? What will be the market growth during the forecast period, i.e., 2020-2025?

Which region will see high demand for the product in the upcoming years?

What are the factors driving the growth of the Artificial Intelligence in Transportation Industry market?

Which sub-market will make the most significant contribution to the market?

What are the Artificial Intelligence in Transportation Industry market opportunities for existing and entry-level players?

What are various long-term and short-term strategies adopted by the market players?

What are the key business strategies being adopted by new entrants in the Artificial Intelligence in Transportation Industry market?

Request Customization on This Report @ https://www.cuereport.com/request-for-customization/13774

inHEART Raises $4.2 Million to Improve Treatments for Cardiac Arrhythmias With Medical Imaging, Artificial Intelligence and Numerical Simulations -…

PESSAC, France, July 2, 2020 /PRNewswire/ -- inHEART, providing a cloud-based medical image analysis solution for cardiac interventions on patients with arrhythmias, has closed a round of EUR 3.7 million led by Elaia. These funds will be used to accelerate commercial development in Europe, access the US market, and advance its technology leadership with continued development of AI and numerical simulations of cardiac electrical activity.

"We are very happy to have Elaia joining inHEART in this adventure not only for its financial support but also for the dynamism and the expertise of its team in developing startups in healthcare. Elaia and inHEART share the same ambition to create a major player in cardiac electrophysiology, a EUR 5 billion market with healthcare giants such as Johnson & Johnson, Abbott, Boston Scientific or Medtronic," said Jean-Marc Peyrat, CEO and co-founder of inHEART.

Cardiac arrhythmias, a public health issue without a miracle solution

Heart rhythm disorders, notably as a cause of sudden cardiac death, are a major source of morbidity worldwide. Current treatment solutions for patients with arrhythmias are not optimal, whether drugs, implantable devices or catheter ablation procedures, which are lengthy, complex and expensive. For instance, a repeat procedure is needed in 40% of patients with ventricular arrhythmias due to recurrence.

More timely and effective procedures

inHEART provides a cloud-based software solution that transforms preoperative medical images into a 3D digital twin of the patient's heart. This digital twin enables the cardiologist to better plan the procedure and also to assist in navigating instruments in the patient's heart, substantially reducing procedure duration and failure rates.

A compelling technology with exciting perspectives for the future

inHEART's technology has been used on more than 2,000 patients in 40 centers around the world and has been included in the latest international expert recommendations.

"I can't do without it anymore. My procedures are much simpler and faster. My patients directly benefit from this technology and their arrhythmias are less recurrent. Future will be even brighter with artificial intelligence that already allows us to very quickly process patient scans and that will enable tomorrow the prediction of patients at risk and the refinement of therapeutic strategies." commented Prof. Pierre Jas, cardiologist and co-founder of inHEART.

French academic excellence at the heart of the project

inHEART is a spin-off from IHU Liryc and Inria, two world-leading centers in, respectively, cardiac electrophysiology and digital science and technology. inHEART was founded by an experienced team of scientists and physicians, including Maxime Sermesant, expert in AI and cardiac modeling, Hubert Cochet, radiologist expert in cardiac imaging, and Pierre Jaïs, cardiologist who is a pioneer and international key opinion leader in cardiac catheter ablation.

"After more than ten years of multidisciplinary collaboration as only a few exist in the world between cardiologists, radiologists, engineers and researchers in computer science, we have a disruptive technology that answers a real clinical need," added Prof. Hubert Cochet, radiologist and co-founder of inHEART.

"inHEART is the perfect match between an advanced technology from Inria and IHU Liryc, known as a center of excellence in cardiac electrophysiology, all of it coordinated by experts who we are glad to follow in this beautiful adventure. Their computational solution to model the heart in 3D revolutionizes cardiac catheter ablation procedures and is already deployed internationally with excellent clinical feedbacks," saidSamantha Jrusalmy, partner at Elaia.

About inHEART

inHEART is a spin-off from IHU Liryc and Inria, two top-tier research centers in, respectively, cardiac electrophysiology and digital science and technology, that develops software solutions for medical image analysis and cardiac modeling in heart rhythm disorders. inHEART's vision is to bridge radiology and cardiology and become a worldwide leader in image-guided diagnosis, therapy planning and navigation software solutions for heart rhythm disorders.

Learn more: http://www.inheart.fr

About Elaia

Elaia is a European top-tier VC firm with a strong tech DNA. We back tech disruptors with global ambition from early stage to growth development. For the past 17 years, our commitment has been to deliver high performance with values. We are proud to have been an active partner in over 70 startups including success stories such as Criteo (Nasdaq), Orchestra Networks (acquired by Tibco), Sigfox, Teads (acquired by Altice), Mirakl and Shift Technology.

Learn more: http://www.elaia.com

Contact: Jean-Marc Peyrat, [emailprotected], +33 (0)5 35 38 19 72

SOURCE inHEART

https://www.inheart.fr/

What Defines Artificial Intelligence? The Complete WIRED …

Artificial intelligence is overhyped. There, we said it. It's also incredibly important.

Superintelligent algorithms aren't about to take all the jobs or wipe out humanity. But software has gotten significantly smarter of late. It's why you can talk to your friends as an animated poop on the iPhone X using Apple's Animoji, or ask your smart speaker to order more paper towels.

Tech companies' heavy investments in AI are already changing our lives and gadgets, and laying the groundwork for a more AI-centric future.

The current boom in all things AI was catalyzed by breakthroughs in an area known as machine learning. It involves training computers to perform tasks based on examples, rather than by relying on programming by a human. A technique called deep learning has made this approach much more powerful. Just ask Lee Sedol, holder of 18 international titles at the complex game of Go. He got creamed by software called AlphaGo in 2016.

For most of us, the most obvious results of the improved powers of AI are neat new gadgets and experiences such as smart speakers, or being able to unlock your iPhone with your face. But AI is also poised to reinvent other areas of life. One is health care. Hospitals in India are testing software that checks images of a person's retina for signs of diabetic retinopathy, a condition frequently diagnosed too late to prevent vision loss. Machine learning is vital to projects in autonomous driving, where it allows a vehicle to make sense of its surroundings.

There's evidence that AI can make us happier and healthier. But there's also reason for caution. Incidents in which algorithms picked up or amplified societal biases around race or gender show that an AI-enhanced future won't automatically be a better one.

The Beginnings of Artificial Intelligence

Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language. He had high hopes of a breakthrough toward human-level machines. "We think that a significant advance can be made," he wrote with his co-organizers, "if a carefully selected group of scientists work on it together for a summer."

Moments that Shaped AI

1956

The Dartmouth Summer Research Project on Artificial Intelligence coins the name of a new field concerned with making software smart like humans.

1965

Joseph Weizenbaum at MIT creates Eliza, the first chatbot, which poses as a psychotherapist.

1975

Meta-Dendral, a program developed at Stanford to interpret chemical analyses, makes the first discoveries by a computer to be published in a refereed journal.

1987

A Mercedes van fitted with two cameras and a bunch of computers drives itself 20 kilometers along a German highway at more than 55 mph, in an academic project led by engineer Ernst Dickmanns.

1997

IBM's computer Deep Blue defeats chess world champion Garry Kasparov.

2004

The Pentagon stages the Darpa Grand Challenge, a race for robot cars in the Mojave Desert that catalyzes the autonomous-car industry.

2012

Researchers in a niche field called deep learning spur new corporate interest in AI by showing their ideas can make speech and image recognition much more accurate.

2016

AlphaGo, created by Google unit DeepMind, defeats a world champion player of the board game Go.

Those hopes were not met, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers dreaming of intelligent machines coalesce into a proper academic field.

Early work often focused on solving fairly abstract problems in math and logic. But it wasn't long before AI started to show promising results on more human tasks. In the late 1950s Arthur Samuel created programs that learned to play checkers. In 1962 one scored a win over a master at the game. In 1967 a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.

As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for tasks like understanding language. Others were inspired by the importance of learning to human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone, as computers mastered more tasks that could previously be done only by people.

Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math loosely inspired by how brain cells work, known as artificial neural networks. As a network processes training data, connections between the parts of the network adjust, building up an ability to interpret future data.
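
To make the mechanics concrete, here is a minimal sketch in Python (NumPy only) of a tiny neural network: data flows forward through weighted connections, and each training step adjusts those connections to reduce the error on the training data. The toy task (learning XOR) and every name in the code are illustrative assumptions, not anything from the article.

```python
import numpy as np

# Toy training data: the XOR function, a classic task that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))  # connections: input -> hidden
W2 = rng.normal(size=(4, 1))  # connections: hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: the data passes through the "web of math".
    h = sigmoid(X @ W1)    # hidden-layer activations
    out = sigmoid(h @ W2)  # the network's current interpretation of the data

    # Backward pass: nudge the connections to shrink the prediction error.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

print(out.round(2))  # after training, typically close to [[0], [1], [1], [0]]
```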

Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Perceptron Mark 1 from 1958, for example, learned to distinguish different geometric shapes, and got written up in The New York Times as the "Embryo of Computer Designed to Read and Grow Wiser." But neural networks tumbled from favor after an influential 1969 book co-authored by MIT's Marvin Minsky suggested they couldn't be very powerful.

Not everyone was convinced, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fueled with large piles of data and powerful computer chips could give machines new powers of perception.

Increasing Transparency at the National Security Commission on Artificial Intelligence – Lawfare

In 2018, Congress established the National Security Commission on Artificial Intelligence (NSCAI), a temporary, independent body tasked with reviewing the national security implications of artificial intelligence (AI). But two years later, the commission's activities remain little known to the public. Critics have charged that the commission has conducted activities of interest to the public outside of the public eye, only acknowledging that meetings occurred after the fact and offering few details on evolving commission decision-making. As one commentator remarked, "Companies or members of the public interested in learning how the Commission is studying AI are left only with the knowledge that appointed people met to discuss these very topics, did so, and are not yet releasing any information about their recommendations."

That perceived lack of transparency may soon change. In June, the U.S. District Court for the District of Columbia handed down its decision in Electronic Privacy Information Center v. National Security Commission on Artificial Intelligence, holding that Congress compelled the NSCAI to comply with the Federal Advisory Committee Act (FACA). Under FACA, the commission must hold open meetings and proactively provide records and other materials to the public. This decision follows a ruling from December 2019, holding that the NSCAI must also provide historical documents upon request under the Freedom of Information Act (FOIA). As a result of these decisions, the public is likely to gain increased access to and insight into the once-opaque operations of the commission.

Lawmakers established the NSCAI in the John S. McCain National Defense Authorization Act (NDAA) for fiscal 2019, § 1051, which tasked the commission with "consider[ing] the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States." The commission's purview includes an array of issues related to the implications and uses of artificial intelligence and machine learning for national security and defense, including U.S. competitiveness and leadership, research and development, ethics, and data standards.

The NSCAI is currently chaired by Eric Schmidt, the former executive chairman of Google's parent company, Alphabet. The commission's 15 members, appointed by a combination of Congress, the secretary of defense and the secretary of commerce, receive classified and unclassified briefings, meet in working groups and engage with industry. They report their findings and recommendations to the president and Congress, including in an annual report.

The Electronic Privacy Information Center (EPIC), a research center focused on privacy and civil liberties issues in the digital age, submitted a request to the NSCAI in September 2019, seeking access to upcoming meetings and records prepared by the commission under FACA and FOIA. In the six-month period prior to the request, the NSCAI held more than a dozen meetings and received more than 100 briefings, according to EPIC. At the time it filed the lawsuit, EPIC noted that the commission's first major report was also one month overdue for release. When the commission did not comply with the requests under FOIA and FACA, EPIC brought suit under the two laws.

EPIC's complaint alleged that the NSCAI had conducted its operations opaquely in its short lifespan. Since its establishment, the commission "has operated almost entirely in secret with meetings behind closed doors[,]" and "has failed to publish or disclose any notices, agendas, minutes, or materials." If Congress had intended the NSCAI to comply with FOIA and FACA, such activity would not satisfy the statutes' requirements. Given the potential implications of federal artificial intelligence decisions for privacy, cybersecurity, human rights, and algorithmic bias, EPIC argued that "[p]ublic access to the records and meetings of the AI Commission is vital to ensure government transparency and democratic accountability." The complaint also noted the potential ramifications of commission activities for the government, private sector, and public, as well as the importance of artificial intelligence safeguards in the national security context due to limited public oversight. According to EPIC, increasing public participation would permit greater input into the development of national AI policy by those whose privacy and data security could potentially be affected.

The U.S. District Court for the District of Columbia addressed EPIC's FOIA claim in a December 2019 decision. FOIA requires agencies to disclose their records to a party upon request, barring exemptions (including for information classified to protect national security). EPIC alleged that the NSCAI failed to uphold its obligations under FOIA: to process FOIA requests in a timely fashion; to process EPIC's FOIA requests in an expedited manner, in accordance with EPIC's claims of urgency; and to make available for public inspection and copying its "records, reports, transcripts, minutes, appendixes, working papers, drafts, studies, agenda, or other documents." The commission, which at the time did not have a FOIA processing mechanism in place or other pending FOIA requests, argued that it was not an agency subject to FOIA.

The court's inquiry centered on whether the NSCAI is an "agency" under FOIA. Comparing the language establishing the NSCAI with FOIA's definition of "agency," the court held that the NSCAI is subject to FOIA. In his decision, District Judge Trevor McFadden noted that "Congress could have hardly been clearer." As a result, since that time, the commission has had to produce historical documents in response to FOIA requests.

FACA, by contrast, applies forward-looking requirements specifically to federal advisory committees. These mandates include requiring committees to open meetings to the public and announce them in the Federal Register, and to make reports, transcripts and other commission materials publicly available. The measures aim to inform the public about and invite public engagement with the committees that provide expertise to the executive branch. EPIC alleged that the NSCAI violated FACA by failing to hold open meetings and provide notice of them, and by failing to make records available to the public. EPIC sought mandamus relief pursuant to the alleged FACA violations.

In its June decision, the district court ruled that FACA applies to the NSCAI. The commission had filed a motion to dismiss the FACA claims, arguing that it could not be subject to both FOIA and FACA. Since the court had previously held the NSCAI to be an agency for purposes of FOIA, the commission reasoned that it could not simultaneously be an advisory committee under FACA. McFadden disagreed. Invoking the Roman god Janus's two faces, one forward-looking and the other backward-facing, he wrote, "[L]ike Janus, the Commission does indeed have two faces, and ... Congress obligated it to comply with FACA as well as FOIA." The court could not identify a conflict between the requirements of the two statutes, despite differences in their obligations and exceptions. Rather, it noted that if such conflicts arise, "it will be incumbent on the parties and the Court to resolve any difficulties." The court dismissed additional claims under the Administrative Procedure Act (APA) for lack of subject matter jurisdiction, as it determined that the commission is not an agency under the APA definition.

The court's decision turned on whether the NSCAI is an "advisory committee" subject to FACA. The court determined that the statutory text of the 2019 NDAA establishing the NSCAI "fit[s] the [FACA] definition of advisory committee like a glove." Furthermore, turning to the full text of the 2019 NDAA, the court noted that the law contains at least two instances in which it explicitly exempts a government body from FACA. The court read the 2019 NDAA as silent when FACA applies and explicit when FACA does not apply. Given Congress's silence on the applicability of FACA to the NSCAI in the 2019 NDAA, and again in the 2020 NDAA, the court reasoned that Congress intended the NSCAI to be subject to FACA.

In determining the NSCAI to be subject to FACA, in addition to FOIA, the court has compelled the commission to adopt a more transparent operating posture going forward. Since the December 2019 decision on FOIA, the NSCAI has produced a number of historical records in response to FOIA requests. The recent ruling on FACA grounds requires the NSCAI to hold open meetings, post notice of meetings in advance and make documents publicly available. As a result, the commissions process of compiling findings and developing recommendations for government action related to artificial intelligence and machine learning will likely become more accessible to the public.

The two court decisions come in time to have a noticeable impact on the remaining term of the temporary commission. While the NSCAI was previously due to disband later in 2020, the NDAA for fiscal 2020, § 1735, extended the commission's lifespan by one year, to October 1, 2021. Citing federal budgetary timelines and the pace of AI development, the commission released its first set of recommendations in March 2020 and expressed its intent to publish additional recommendations on a quarterly basis thereafter. The commission is due to submit its final report to Congress by March 1, 2021. As the NSCAI prepares to enter its final year of operations and develop its closing recommendations, the public will have a clearer window into the commission's work.

Artificial Intelligence Systems Will Need to Have Certification, CISA Official Says – Nextgov

Vendors of artificial intelligence technology should not be shielded by intellectual property claims and will have to disclose elements of their designs and be able to explain how their offering works in order to establish accountability, according to a leading official from the Cybersecurity and Infrastructure Security Agency.

"I don't know how you can have a black-box algorithm that's proprietary and then be able to deploy it and be able to go off and explain what's going on," said Martin Stanley, a senior technical advisor who leads the development of CISA's artificial intelligence strategy. "I think those things are going to have to be made available through some kind of scrutiny and certification around them so that those integrating them into other systems are going to be able to account for what's happening."

Stanley was among the speakers on a recent Nextgov and Defense One panel where government officials, including a member of the National Security Commission on Artificial Intelligence, shared some of the ways they are trying to balance reaping the benefits of artificial intelligence with risks the technology poses.

Experts often discuss the rewards of programming machines to do tasks humans would otherwise have to labor on, for both offensive and defensive cybersecurity maneuvers, but the algorithms behind such systems and the data used to train them into taking such actions are also vulnerable to attack. And the question of accountability applies to users and developers of the technology.

Artificial intelligence systems are code that humans write, but they exercise their abilities and become stronger and more efficient using data that is fed to them. If the data is manipulated, or "poisoned," the outcomes can be disastrous.
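
As a hedged illustration of that failure mode, the sketch below (Python, scikit-learn; the dataset and the 30% poisoning rate are invented for this example) trains the same model twice, once on clean labels and once after an attacker has flipped a share of the training labels, and compares the results.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for whatever data the system learns from.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# "Poisoned" run: an attacker silently flips 30% of the training labels.
rng = np.random.default_rng(0)
flip = rng.random(len(y_tr)) < 0.30
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, np.where(flip, 1 - y_tr, y_tr))

print("accuracy trained on clean data:   ", round(clean.score(X_te, y_te), 3))
print("accuracy trained on poisoned data:", round(poisoned.score(X_te, y_te), 3))
# The learning code is identical in both runs; only the data differs.
```

How much damage this does depends on the data and the attack; targeted flips near the decision boundary can be far more destructive than the random flips shown here.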

Changes to the data could be things that humans wouldn't necessarily recognize, but that computers do.

"We've seen ... trivial alterations that can throw off some of those results, just by changing a few pixels in an image in a way that a person might not even be able to tell," said Josephine Wolff, a Tufts University cybersecurity professor who was also on the panel.

And while it's true that behind every AI algorithm is a human coder, the designs are becoming so complex that "you're looking at automated decision-making where the people who have designed the system are not actually fully in control of what the decisions will be," Wolff says.

This makes for a threat vector where vulnerabilities are harder to detect until it's too late.

"With AI, there's much more potential for vulnerabilities to stay covert than with other threat vectors," Wolff said. "As models become increasingly complex, it can take longer to realize that something is wrong before there's a dramatic outcome."

For this reason, Stanley said an overarching factor CISA uses to help determine which use cases AI gets applied to within the agency is to assess the extent to which they offer "high benefits and low regrets."

"We pick ones that are understandable and have low complexity," he said.

Among the other things federal personnel need to be mindful of is who has access to the training data.

"You can imagine you get an award done, and everyone knows how hard that is from the beginning, and then the first thing that the vendor says is 'OK, send us all your data, how's that going to work so we can train the algorithm?'" he said. "Those are the kinds of concerns that we have to be able to address."

"We're going to have to continuously demonstrate that we are using the data for the purpose that it was intended," he said, adding, "There's some basic science that speaks to how you interact with algorithms and what kind of access you can have to the training data. Those kinds of things really need to be understood by the people who are deploying them."

A crucial but very difficult element to establish is liability. Wolff said ideally, liability would be connected to a potential certification program where an entity audits artificial intelligence systems for factors like transparency and explainability.

That's important, she said, for answering the question of "how can we incentivize companies developing these algorithms to feel really heavily the weight of getting them right and be sure to do their own due diligence knowing that there are serious penalties for failing to secure them effectively."

But this is hard, even in the world of software development more broadly.

"Making the connection is still very unresolved. We're still in the very early stages of determining what would a certification process look like, who would be in charge of issuing it, what kind of legal protection or immunity might you get if you went through it," she said. "Software developers and companies have been working for a very long time, especially in the U.S., under the assumption that they can't be held legally liable for vulnerabilities in their code, and when we start talking about liability in the machine learning and AI context, we have to recognize that that's part of what we're grappling with, an industry that for a very long time has had very strong protections from any liability."

View from the Commission

Responding to this, Katharina McFarland, a member of the National Security Commission on Artificial Intelligence, referenced the Pentagon's Cybersecurity Maturity Model Certification (CMMC) program.

The point of the CMMC is to establish liability for Defense contractors, Defense Acquisitions Chief Information Security Officer Katie Arrington has said. But McFarland highlighted difficulties facing CMMC that program officials themselves have acknowledged.

"I'm sure you've heard of the [CMMC], there's a lot of thought going on, the question is the policing of it," she said. "When you consider the proliferation of the code that's out there, and the global nature of it, you really will have a challenge trying to take a full thread and to pull it through a knothole to try to figure out where that responsibility is. Our borders are very porous and machines that we buy from another nation may not be built with the same biases that we have."

McFarland, a former head of Defense acquisitions, stressed that AI is more often than not viewed with fear and said she wanted to see more of a balance in procurement considerations for the technology.

"I found that we had a perverse incentive built into our system and that was that we took, sometimes, I think extraordinary measures to try to creep into the one percent area for failure," she said. "In other words, we would want to 110% test a system and in doing so, we might miss the venue of where its applicability in a theater to protect soldiers, sailors, airmen and Marines is needed."

She highlighted upfront a need for testing and verification but said it shouldn't be done at the expense of adoption. To that end, she asks that industry help by sharing the testing tools they use.

"I would encourage industry to think about this from the standpoint of what tools would we need, because they're using them, in the department, in the federal space, in the community, to give us transparency and verification," she said, "so that we have a high confidence in the utility, in the data that we're using and the AI algorithms that we're building."

Artificial intelligence levels show AI is not created equal. Do you know what the vendor is selling? – Spend Matters

Just like there are eight levels to analytics, as mentioned in a recent Spend Matters PRO brief, artificial intelligence (AI) comes in various stages today, even though there is no such thing as true AI by any standard worth its technical weight.

But just because we don't yet have true AI doesn't mean today's AI can't help procurement improve its performance. We just need enough computational intelligence to allow software to do the tactical and non-value-added tasks that software should be able to perform with all of the modern computational power available to us. As long as the software can do the tasks as well as an average human expert the vast majority of the time (and kick up a request for help when it doesn't have enough information, or when the probability that it will outperform a human expert is lower than that of the expert performing the task), that's more than good enough.
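
That "kick up a request for help" behavior is commonly implemented as a simple confidence floor on the model's output. Here is a minimal sketch assuming a scikit-learn-style classifier; the 0.90 threshold and the escalation label are illustrative choices, not a prescribed design.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)

CONFIDENCE_FLOOR = 0.90  # below this, the software asks a human expert instead

def classify_or_escalate(x):
    """Return the model's answer, or flag the case for human review."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    if probs.max() < CONFIDENCE_FLOOR:
        return "escalate-to-human"  # not confident enough to act alone
    return int(probs.argmax())

decisions = [classify_or_escalate(x) for x in X_te]
escalated = sum(d == "escalate-to-human" for d in decisions)
print(f"{escalated} of {len(decisions)} cases routed to a human expert")
```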

The reality is, for some basic tactical tasks, there are plenty of software options today (e.g., intelligent invoice processing). And even for some highly specialized tasks that we thought could never be done by a computer, we have software that can do it better, like early cancerous growth detection in MRIs and X-rays.

That being said, we also have a lot of software on the market that claims to be artificial intelligence but is not even remotely close to what AI is today, let alone what useful AI software should be. For software to be classified as AI today, it must be capable of artificial learning: evolving its models or code and improving over time.

So, in this PRO article, we are going to define the levels of AI that do exist today, and that may exist tomorrow. This will allow you to identify what truth there is to the claims that a vendor is making and whether the software will actually be capable of doing what you expect it to.

Not counting true AI, there are five levels of AI that are available today or will likely be available tomorrow:

Let's take a look at each group.

Protecting inventions which use Machine Learning and Artificial Intelligence – Lexology

There has been a lot of talk recently about the DABUS family of patent applications where DABUS, an artificial intelligence (AI), was named as an inventor. This has prompted a lot of discussion around whether an inventor must be a human being and there is no doubt that this discussion will continue as AI finds its way into more and more aspects of our lives.

However, another part of the discussion around AI in patents concerns the patentability of inventions which apply machine learning (ML) and AI-based concepts to the solution of technical problems.

Why consider patent protection?

Patents protect technical innovations and technical solutions to problems. They can offer broad legal protection for the technical concept you develop, albeit in exchange for disclosure of the invention.

Here in the UK, a patent can give you the right to prevent others from exploiting your invention and can help you to mark out legal exclusivity around a patented product.

Can I not just keep the invention a secret?

It is an option to utilise the invention as a trade secret, but protecting a trade secret involves considerable effort to implement the technical and administrative environment which will enable it to stay secret. This can include changing your physical workplace to confine certain environments where trade-secret-protected inventions are being used. It can also include implementing technical measures to inhibit access to trade secrets by unauthorised individuals. Such technical measures are particularly important for AI and ML-focused inventions, as they are often embodied in computer program code which can simply be transferred from one computer to another.

What is perhaps more pertinent is that if your AI or ML-enabled concept is to be implemented in association with hardware which is to be sold publicly, then this will by definition negate the value of the concept as a trade secret as it will become publicly available. It may require decompilation or reverse engineering to access the code, but this does not mean that the code is secret.

There may be additional know-how associated with your invention which is worth protecting as a trade secret but as part of a suite of IP rights (including patents) which are focused on protecting your invention.

How much information does the patent application require?

All patent applications are drafted for the skilled person, who in this context would be somebody skilled in the techniques of ML and AI, although not necessarily an expert. That is to say, the application needs to contain enough information to enable such a person to put the invention into effect.

This should include technical information about features which provide an advantage over previous systems and clear identification of advantageous features and why they are advantageous. This will give your Patent Attorney the best possible chance of framing the invention in a way which convinces patent offices around the world to grant a patent.

It is also advisable to include disclosure of at least one set of training data and details of how it has been trained.

In the context of AI and ML it is particularly important to draw attention to technically advantageous features as some patent offices will need a lot of convincing to grant patents for these inventions. It is particularly useful to draw attention to features which solve technical problems or are motivated by technical considerations rather than economic or commercial considerations.

The EPO has stressed that patents will be granted when ML or AI-based inventions are limited to a specific technical application or require a specific technical implementation directed to a technical purpose. These advantages and details of implementation will enable a patent attorney skilled in drafting patent applications for ML/AI to present your invention in the best possible light from the perspective of the EPO or the UKIPO, as they will enable us to clearly set out how the invention delivers the technical application and solves the technical problem.

Our software patents team are specifically noted for their skill in drafting computer implemented inventions for the UKIPO and the EPO.

Although a lot of information is required, we do not necessarily need program code. It would help, however, to at least include a pseudocode description of the invention so that we can garner an understanding of how the invention works as a series of steps; this helps with the description.

Are AI and ML not just like software, i.e. cannot be patented?

It is possible to patent software-based inventions but, like other inventions, the invention needs to solve a technical problem. This is the same with inventions which apply AI and ML.

AI and ML inventions are treated in Europe like other mathematical methods in that they are rejected as excluded from patentability if they do not solve a technical problem. It is best to illustrate this by example.

If your invention improves a technique which is used to analyse data (for example, it improves K-means clustering) with no other benefit to a technical field, then you can expect to face considerable obstacles to obtaining a patent to protect your invention. However, if your invention applies K-means clustering to achieve a specific improvement to a specific technical system, then you are likely to face fewer obstacles to obtaining a patent for your invention.

That is to say, when considering whether you wish to pursue patent protection for the technology you have developed, focus on what the innovation achieves in a technical field.

What if the technique has been applied elsewhere? Can I still get a patent?

Referring back to our K-means clustering example: if you see that K-means clustering has been used in sensing rain droplets on a car window to determine the appropriate setting for the windscreen wipers, that does not necessarily mean that you cannot get a patent for K-means clustering applied to determining the likelihood of a denial-of-service attack on a server.

That is to say, if you are applying known technology to a new field and solving a technical problem in that field, there is an arguable case for a patentable invention.
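
Purely for illustration, here is a rough sketch in Python of what that hypothetical denial-of-service application of K-means might look like: cluster per-client traffic features and treat the cluster whose centroid shows an extreme request rate as suspect. The features, cluster count and numbers below are all invented.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-client features: [requests per second, mean bytes per request]
rng = np.random.default_rng(1)
normal = rng.normal(loc=[5, 800], scale=[2, 150], size=(500, 2))
attack = rng.normal(loc=[400, 60], scale=[50, 20], size=(20, 2))  # flood-like
traffic = np.vstack([normal, attack])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(traffic)

# Flag the cluster whose centroid has the highest request rate.
suspect = int(np.argmax(km.cluster_centers_[:, 0]))
flagged = np.flatnonzero(km.labels_ == suspect)
print(f"flagged {len(flagged)} clients as potential denial-of-service sources")
```

The patentable contribution, if any, would lie in the specific technical system built around such a step, not in K-means itself.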

Are there differences between Europe, US and other jurisdictions?

The approach to these inventions across jurisdictions can be different and complete consistency is difficult to guarantee. However, in drafting your patent application we would seek to make the language as flexible as possible in order to admit differing interpretations of the law across jurisdictions and to give the prosecution of your patent applications in those jurisdictions the greatest possible chance of success.

What do I do next?

If you have developed technology which applies AI or ML, then consider whether you could achieve patent protection for that invention. Contact one of our software patent experts to discuss the invention and your options.

It is also useful to note that having a pending patent application can be a useful deterrent for competitors and the uncertainty created for third parties by the existence of the patent application can provide you with the space in the market to establish your exclusivity, develop your customer base and build your brand.

Artificial intelligence is on the rise – Independent Australia

New developments and opportunities are opening up in artificial intelligence, says Paul Budde.

I RECENTLY followed a "lunch box lecture" organised by the University of Sydney. In the talk, Professor Zdenka Kuncic explored the very topical issue of artificial intelligence.

The world is infatuated with artificial intelligence (AI), and understandably so, given its super-human ability to find patterns in big data, as we all notice when using Google, Facebook, Amazon, eBay and so on. But the so-called general intelligence that humans possess remains elusive for AI.

Interestingly, Professor Kuncic approached this topic from a physics perspective, viewing the brain's neural network as a physical hardware system rather than as algorithm-based software of the kind used, for example, in AI research for social media.

Her approach reveals clues that suggest the underlying nature of intelligence is physical.

Basically, what this means is that a software-based system will require ongoing input from software specialists to make updates based on new developments. Her approach, however, is to look at a physical system based on nanotechnology and use these networks as self-learning systems, where human intervention is no longer required.

Imagine the implications of the communications technologies that are on the horizon, where basically billions of sensors and devices will be connected to networks.

The data from these devices need to be processed in real-time and dynamic decisions will have to be made without human intervention. The driverless car is, of course, a classic example of such an application.

The technology needed to make such a system work will have to be based on edge technology in the device out there in the field. It is not going to work in any scaled-up situation if the data from these devices will first have to be sent to the cloud for processing.

Nano networks are a possible solution for such situations. A nanonetwork or nanoscale network is a set of interconnected nanomachines (devices a few hundred nanometers or a few micrometres at most in size), which at the moment can perform only very simple tasks such as computing, data storing, sensing and actuation.

However, Professor Kuncic expects that new developments will expand the capabilities of single nanomachines, both in terms of complexity and range of operation, by allowing them to coordinate, share and fuse information.

Professor Kuncic concentrates, in her work, on electromagnetics for communication at the nanoscale.

This is commonly defined as the 'transmission and reception of electromagnetic radiation from components based on novel nanomaterials'.

Professor Kuncic mentioned this technology was still in its infancy. She was very upbeat about the future, based on the results of recent research and international collaboration. Advancements in carbon and molecular electronics have opened the door to a new generation of electronic nanoscale components such as nanobatteries, nanoscale energy-harvesting systems, nano-memories, logical circuitry in the nanoscale and even nano-antennas.

From a communication perspective, the unique properties observed in nanomaterials will decide the specific bandwidths for the emission of electromagnetic radiation, the time lag of the emission, and the magnitude of the emitted power for a given input energy.

The researchers are looking at the output of these nanonetworks rather than the input. The process is analogue rather than digital. In other words, the potential output provides a range of possible choices, rather than one (digital) outcome.

The trick is to understand what choices are made in a nanonetwork and why.

There are two main alternatives for communication at the nanoscale: the electromagnetic approach pursued by Professor Kuncic, and another based on molecular communication.

Nanotechnology could have an enormous impact on, for example, the future of 5G. If nanotechnology can be included in the various Internet of Things (IoT) sensors and devices, then this will open up an enormous number of new applications.

First, it has been experimentally demonstrated that it is possible to receive and demodulate an electromagnetic wave by means of a nano radio.

Second, graphene-based nano-antennas have been analysed as potential electromagnetic radiators in the terahertz band.

Once these technologies are further developed and commercialised, we can see a revolution in edge-computing.

Paul Budde is an Independent Australia columnist and managing director of Paul Budde Consulting, an independent telecommunications research and consultancy organisation. You can follow Paul on Twitter @PaulBudde.

Support independent journalism. Subscribe to IA.

How Coronavirus and Protests Broke Artificial Intelligence And Why Its A Good Thing – Observer

Until February 2020, Amazon thought that the algorithms that controlled everything from their shelf space to their promoted products were practically unbreakable. For years they had used simple and effective artificial intelligence (AI) to predict buying patterns, and planned their stock levels, marketing, and much more based on a simple question: who usually buys what?

Yet as COVID-19 swept the globe they found that the technology that they relied on was much more shakable than they had thought. As sales of hand sanitizer, face masks, and toilet paper soared, sites such as Amazon found that their automated systems were rendered almost useless as AI models were thrown into utter disarray.

Elsewhere, the use of AI in everything from journalism to policing has been called into question. As long-overdue action on racial inequalities in the US has been demanded in recent weeks, companies have been challenged for using technology that regularly displays sometimes catastrophic ethnic biases.

Microsoft was recently held to account after the AI algorithms that it used on its MSN news website confused mixed-race members of girlband Little Mix, and many companies have now suspended the sale of facial recognition technologies to law enforcement agencies after it was revealed that they are significantly less effective at identifying images of minority individuals, leading to potentially inaccurate leads being pursued by police.

"The past month has brought many issues of racial and economic injustice into sharp relief," says Rediet Abebe, an incoming assistant professor of computer science at the University of California, Berkeley. "AI researchers are grappling with what our role should be in dismantling systemic racism, economic oppression, and other forms of injustice and discrimination. This has been an opportunity to reflect more deeply on our research practices, on whose problems we deem to be important, whom we aim to serve, whom we center, and how we conduct our research."

From the COVID-19 pandemic to the Black Lives Matter protests, 2020 has been a year characterized by global unpredictability and social upheaval. Technology has been a crucial medium of effecting change and keeping people safe, from test and track apps to the widespread use of social media to spread the word about protests and petitions. But amidst this, machine learning AI has sometimes failed to meet its remit, lagging behind rapid changes in social behavior and falling short on the very thing that it is supposed to do best: gauging the data fed into it and making smart choices.

The problem often lies not with the technology itself, but in a lack of data used to build algorithms, meaning that they fail to reflect the breadth of our society and the unpredictable nature of events and human behavior.

"Most of the challenges to AI that have been identified by the pandemic relate to the substantial changes in behavior of people, and therefore in the accuracy of AI models of human behavior," says Douglas Fisher, an associate professor of computer science at Vanderbilt University. "Right now, AI and machine learning systems are stovepiped, so that although a current machine learning system can make accurate predictions about behaviors under the conditions under which it learned them, the system has no broader knowledge."
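
A minimal sketch of that "stovepiped" behavior, with invented numbers: a model fit to stable pre-pandemic demand keeps extrapolating the old pattern after behavior shifts, and only an explicit check of its recent prediction error against the historical noise level reveals that its assumptions no longer hold.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# "Pre-pandemic" weekly demand: roughly flat, with noise.
weeks_past = np.arange(100).reshape(-1, 1)
sales_past = 100 + rng.normal(0, 5, size=100)
model = LinearRegression().fit(weeks_past, sales_past)

# Behavior shifts abruptly (think hand sanitizer in early 2020).
weeks_new = np.arange(100, 112).reshape(-1, 1)
sales_new = 300 + rng.normal(0, 5, size=12)

# The model has no broader knowledge, so it extrapolates the old pattern...
pred = model.predict(weeks_new)

# ...but monitoring recent error against historical noise at least flags the drift.
baseline = np.abs(sales_past - model.predict(weeks_past)).std()
if np.abs(sales_new - pred).mean() > 5 * baseline:
    print("drift detected: the model's world has changed; retrain or escalate")
else:
    print("no drift detected")
```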

The last few months have highlighted the need for greater nuance in AI; in short, we need technology that can be more human. But in a society increasingly experimenting with using AI to carry out such crucial roles as identifying criminal suspects or managing food supply chains, how can we ensure that machine learning models are sufficiently knowledgeable?

"Most challenges related to machine learning over the past months result from changes in the data being fed into algorithms," explains Kasia Borowska, Managing Director of AI consultancy Brainpool.ai. "What we see a lot of these days is companies building algorithms that just about do the job. They are not robust, not scalable, and prone to bias. This has often been due to negligence or trying to cut costs; businesses have clear objectives and these are often to do with saving money or simply automating manual processes, and often the ethical side, removing biases or being prepared for change, isn't seen as the primary objective."

Kasia believes that both biases in AI algorithms and an inability to adapt to change and crisis stem from the same problem and present an opportunity to build better technology in the future. She argues that by investing in building better algorithms, issues such as bias and an inability to predict user behavior in times of crisis can be eliminated.

Although companies might previously have been loath to invest time and money into building datasets that did much more than the minimum that they needed to operate, she hopes that the combination of COVID and an increased awareness of machine learning biases might be the push that they need.

"I think that a lot of businesses that have seen their machine learning struggle will now think twice before they try and deploy a solution that isn't robust or hasn't been tested enough," she says. "Hopefully the failure of some AI systems will motivate data scientists as well as corporations to invest time and resources in the background work ahead of jumping into the development of AI solutions. We will see more effort being put into ensuring that AI products are robust and bias-free."

The failures of AI have been undeniably problematic, but perhaps they present an opportunity to build a smarter future. After all, in recent months we have also seen the potential of AI, with new outbreak risk software and deep learning models that help the medical community to predict drugs and treatments and develop prototype vaccines. These strides in progress demonstrate the power of combining smart technology with human intervention, and show that with the right data AI has the power to enact massive positive change.

This year has revealed the full scope of AI, laying bare the challenges that developers face alongside the potential for tremendous benefits. Building datasets that encompass the broadest scope of human experience may be challenging, but it will also make machine learning more equitable, more useful, and much more powerful. It's an opportunity that those in the field should be keen to corner.

The 4th World Intelligence Congress Closed Online With Great Achievements – PRNewswire

At the closing ceremony, InferVision, Danish Carenborg Eco-Industrial Park, and Sino-Singapore Eco-City shared their development experiences, and Wei Ya, a popular Taobao livestream host, explained in detail the scientific and technological elements in "influencer marketing". Yu Lin, general manager of the strategic development department of Alibaba Group in Tianjin, introduced three modes of online poverty alleviation. Liu Gang, deputy dean of the Chinese Institute of New Generation Artificial Intelligence Development Strategies, released the Report on the Development of China's New Generation Artificial Intelligence Technology Industry (2020): The Development of China's New Generation Artificial Intelligence Technology Industry under New Challenges and Opportunities, and Yin Jihui, director of the Tianjin Bureau of Industry and Information Technology, released the Annual Report on the Development of Tianjin Intelligent Technology Industry (2020), which pointed out the direction for the development trend of the artificial intelligence technology industry.

With the help of intelligent technology, this congress held six online events. The unique experience made people feel the charm of intelligent technology and deeply perceive Tianjin's past, present, and future in the field of intelligent technology. According to statistics, the congress released 26 achievements, including reports, policies, and products, to the world. Among them, national ministries and commissions issued 12 achievements, including the Talent Development Report of the Artificial Intelligence Industry issued by the Ministry of Industry and Information Technology, smart travel products and solutions, and the White Paper on Digital Health issued by the National Health Commission. Tianjin released two achievements, namely China's New Generation Artificial Intelligence Technology Industry Development Report (2020) and the Annual Report of Tianjin Intelligent Technology Industry Development (2020). Enterprises and districts in Tianjin released 12 achievements, including the Galaxy Kylin desktop operating system V10 and advanced server operating system V10 released by KylinSoft, support policies of Tianjin Binhai New District, and the Kunpeng Ecological Innovation Center of Huawei Company.

At the closing of the 4th World Intelligence Congress, the Tianjin Municipal People's Government formally extended an invitation to industry leaders, world talents, and friends from all over the world. Welcome to attend the 5th World Intelligence Congress!

Contact: Cui Kejia, Tel: +86-400-019-0516, +86-15120084132, E-mail: [emailprotected]

SOURCE The 4th World Intelligence Congress

https://www.wicongress.org/en
