
Category Archives: Ai

Smart city AI software revenue set to increase 700% by 2025 – SmartCitiesWorld

Posted: April 3, 2020 at 1:49 pm

The global smart city artificial intelligence (AI) software market is set to increase to $4.9 billion in 2025, up from $673.8 million in 2019, according to new analysis from analyst house Omdia. This represents a seven-fold rise.
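The headline multiple is easy to verify from the two figures quoted (a quick arithmetic check, not part of Omdia's report):

```python
# Sanity-check the forecast figures quoted above.
start = 673.8e6   # 2019 smart city AI software revenue (USD)
end = 4.9e9       # forecast 2025 revenue (USD)

multiplier = end / start                      # ~7.3-fold rise
pct_increase = (end - start) / start * 100    # ~627% increase

print(f"{multiplier:.2f}x, +{pct_increase:.0f}%")
```

The growth works out to roughly 7.3-fold, an increase of about 627%, which headlines round to "700%".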

As 4G and 5G are making it easier to collect and manage data, AI is enabling deeper analysis of that data. It can be used to automatically identify patterns or anomalies within data.

Video surveillance is a key area but the coronavirus pandemic could see a bigger focus on the use of AI to better co-ordinate public health responses, Omdia said.

"From video surveillance to traffic control to street lighting, smart city use cases of all types are defined by the collection, management and usage of data," said Keith Kirkpatrick, principal analyst for AI at Omdia. "However, until recently, connecting disparate components and systems together to work in concert has been challenging due to the lack of connectivity solutions that are fast, cost-effective, low latency and ubiquitous in coverage.

"These challenges now are being overcome by leveraging advances in AI and connectivity."

The Artificial Intelligence Applications for Smart Cities report notes that cities can use AI technologies such as machine learning, deep learning, computer vision and natural language processing to save money and deliver benefits to workers and visitors. These can include reduced crime, cleaner air and decreased congestion as well as more efficient government services.

Omdia highlights the example of using AI with video surveillance. When hosting public events, some cities are beginning to use video cameras that are linked to AI-based video analytics technology. AI algorithms scan the video and look for behavioural or situational anomalies that could indicate that a terrorist act or other outbreaks of violence may be about to occur.

Further, Omdia says cities are increasingly employing cloud-based AI systems that can search footage from most closed-circuit TV (CCTV) systems, allowing the platform and technology to be applied to existing camera infrastructure.

Video surveillance can also be combined with AI-based object detection to detect faces, gender, heights and even moods; read licence plates; and identify anomalies or potential threats, such as unattended packages.
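The "unattended package" case above can be illustrated with a toy tracker: flag any detected object that stays in roughly the same place for several consecutive frames. This is only a sketch with invented data and thresholds; production systems pair trained detectors with far more robust tracking.

```python
# Toy sketch of "unattended package" detection: flag a detected object
# that stays in (nearly) the same place for too many consecutive frames.
# The frame format, distances and thresholds are illustrative assumptions.

def flag_stationary(frames, max_move=5.0, min_frames=3):
    """frames: list of dicts mapping object_id -> (x, y) per video frame.
    Returns ids seen in nearly the same spot for min_frames consecutive frames."""
    streak, last_pos, flagged = {}, {}, set()
    for frame in frames:
        for obj_id, (x, y) in frame.items():
            px, py = last_pos.get(obj_id, (x, y))
            moved = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
            streak[obj_id] = (streak.get(obj_id, 0) + 1) if moved <= max_move else 1
            last_pos[obj_id] = (x, y)
            if streak[obj_id] >= min_frames:
                flagged.add(obj_id)
        # objects absent from this frame lose their streak
        for gone in set(streak) - set(frame):
            streak[gone] = 0
    return flagged
```

A package sitting still across three frames is flagged, while a person walking through the scene is not.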


"As the use of surveillance cameras has exploded, AI-based video analytics now represent the only way to extract value in the form of insights, patterns, and action from the plethora of video data generated by smart cities," Omdia's research note says.

Kirkpatrick told SmartCitiesWorld it's too soon to say what the impact of coronavirus will be on smart city AI deployment and spending.

But, he said: "My gut feeling is that AI programmes that are focused largely on efficiency, revenue generation and cost savings will remain in place. However, new initiatives or spending slated for 2021 or 2022 may get pushed back."

He added: "If I had to pick an area that may see increasing spending, it would be around efforts to better co-ordinate public health response using AI, but that will largely depend on the municipality."

Omdia plans to revisit the forecast later in the year when there is more clarity on the financial impact of coronavirus.

The use of AI in cities also raises some concerns around privacy, bias, accuracy and possible manipulation.

Some cities are beginning to take steps to demonstrate oversight. In November, Singapore launched its first AI strategy. The City of New York is hiring an Algorithms Management and Policy Officer (AMPO), who will be responsible for ensuring AI tools used in decision-making are fair and transparent.

Omdia was established in February following the merger of the research division of Informa Tech (Ovum, Heavy Reading and Tractica) and the acquisition of the IHS Markit technology research portfolio.


Q&A: Markus Buehler on setting coronavirus and AI-inspired proteins to music – MIT News

Posted: at 1:49 pm

The proteins that make up all living things are alive with music. Just ask Markus Buehler: the musician and MIT professor develops artificial intelligence models to design new proteins, sometimes by translating them into sound. His goal is to create new biological materials for sustainable, non-toxic applications. In a project with the MIT-IBM Watson AI Lab, Buehler is searching for a protein to extend the shelf-life of perishable food. In a new study in Extreme Mechanics Letters, he and his colleagues offer a promising candidate: a silk protein made by honeybees for use in hive building.

In another recent study, in APL Bioengineering, he went a step further and used AI to discover an entirely new protein. As both studies went to print, the Covid-19 outbreak was surging in the United States, and Buehler turned his attention to the spike protein of SARS-CoV-2, the appendage that makes the novel coronavirus so contagious. He and his colleagues are trying to unpack its vibrational properties through molecular-based sound spectra, which could hold one key to stopping the virus. Buehler recently sat down to discuss the art and science of his work.

Q: Your work focuses on the alpha helix proteins found in skin and hair. What makes this protein so intriguing?

A: Proteins are the bricks and mortar that make up our cells, organs, and body. Alpha helix proteins are especially important. Their spring-like structure gives them elasticity and resilience, which is why skin, hair, feathers, hooves, and even cell membranes are so durable. But they're not just mechanically tough; they have built-in antimicrobial properties. With IBM, we're trying to harness this biochemical trait to create a protein coating that can slow the spoilage of quick-to-rot foods like strawberries.

Q: How did you enlist AI to produce this silk protein?

A: We trained a deep learning model on the Protein Data Bank, which contains the amino acid sequences and three-dimensional shapes of about 120,000 proteins. We then fed the model a snippet of an amino acid chain for honeybee silk and asked it to predict the protein's shape, atom by atom. We validated our work by synthesizing the protein for the first time in a lab, a first step toward developing a thin antimicrobial, structurally durable coating that can be applied to food. My colleague, Benedetto Marelli, specializes in this part of the process. We also used the platform to predict the structure of proteins that don't yet exist in nature. That's how we designed our entirely new protein in the APL Bioengineering study.

Q: How does your model improve on other protein prediction methods?

A: We use end-to-end prediction. The model builds the protein's structure directly from its sequence, translating amino acid patterns into three-dimensional geometries. It's like translating a set of IKEA instructions into a built bookshelf, minus the frustration. Through this approach, the model effectively learns how to build a protein from the protein itself, via the language of its amino acids. Remarkably, our method can accurately predict protein structure without a template. It outperforms other folding methods and is significantly faster than physics-based modeling. Because the Protein Data Bank is limited to proteins found in nature, we needed a way to visualize new structures to make new proteins from scratch.
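In miniature, end-to-end prediction means: encode the amino acid sequence, then map it directly to per-residue 3D coordinates. The sketch below is a stand-in with arbitrary fixed weights (the real models are deep networks trained on the ~120,000 PDB structures); it illustrates only the data flow, not the science.

```python
# Minimal sketch of end-to-end sequence -> structure prediction:
# one-hot encode each amino acid, pass it through a tiny linear "network"
# to emit (x, y, z) per residue. The weights here are arbitrary placeholders;
# a real model learns them from Protein Data Bank structures.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues

def one_hot(residue):
    return [1.0 if aa == residue else 0.0 for aa in AMINO_ACIDS]

def predict_coords(sequence, weights):
    """weights: 3 rows of 20 values (one row per output coordinate)."""
    coords = []
    for i, residue in enumerate(sequence):
        vec = one_hot(residue)
        # linear layer per coordinate, plus i*1.5 to crudely space
        # residues along the chain
        xyz = [sum(w * v for w, v in zip(row, vec)) + i * 1.5 for row in weights]
        coords.append(tuple(xyz))
    return coords

# toy fixed weights: coordinate offset depends on residue identity
W = [[(j * (k + 1)) % 7 * 0.1 for j in range(20)] for k in range(3)]
```

Calling `predict_coords("ACD", W)` returns one (x, y, z) tuple per residue, which is the shape of output an end-to-end folding model produces.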

Q: How could the model be used to design an actual protein?

A: We can build atom-by-atom models for sequences found in nature that haven't yet been studied, as we did in the APL Bioengineering study using a different method. We can visualize the protein's structure and use other computational methods to assess its function by analyzing its stability and the other proteins it binds to in cells. Our model could be used in drug design or to interfere with protein-mediated biochemical pathways in infectious disease.

Q: What's the benefit of translating proteins into sound?

A: Our brains are great at processing sound! In one sweep, our ears pick up all of its hierarchical features: pitch, timbre, volume, melody, rhythm, and chords. We would need a high-powered microscope to see the equivalent detail in an image, and we could never see it all at once. Sound is such an elegant way to access the information stored in a protein.

Typically, sound is made from vibrating a material, like a guitar string, and music is made by arranging sounds in hierarchical patterns. With AI we can combine these concepts, and use molecular vibrations and neural networks to construct new musical forms. Weve been working on methods to turn protein structures into audible representations, and translate these representations into new materials.
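The simplest possible version of turning a protein into sound is to assign each amino acid a pitch. The mapping below (residue index to semitones above A4) is an invented illustration; Buehler's actual method derives the audio from molecular vibrational spectra.

```python
# Toy sonification: map each amino acid to a musical pitch using
# equal temperament. The residue -> semitone mapping is arbitrary,
# chosen only to illustrate the sequence -> melody idea.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
A4 = 440.0  # Hz

def residue_frequency(residue):
    """Pitch n semitones above A4, where n is the residue's alphabet index."""
    n = AMINO_ACIDS.index(residue)
    return A4 * 2 ** (n / 12)

def sonify(sequence):
    """Return one frequency (Hz, rounded) per residue in the sequence."""
    return [round(residue_frequency(r), 1) for r in sequence]
```

Feeding in a sequence yields a melody-like list of frequencies, one note per residue.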

Q: What can the sonification of SARS-CoV-2's "spike" protein tell us?

A: Its protein spike contains three protein chains folded into an intriguing pattern. These structures are too small for the eye to see, but they can be heard. We represented the physical protein structure, with its entangled chains, as interwoven melodies that form a multi-layered composition. The spike protein's amino acid sequence, its secondary structure patterns, and its intricate three-dimensional folds are all featured. The resulting piece is a form of counterpoint music, in which notes are played against notes. Like a symphony, the musical patterns reflect the protein's intersecting geometry realized by materializing its DNA code.

Q: What did you learn?

A: The virus has an uncanny ability to deceive and exploit the host for its own multiplication. Its genome hijacks the host cell's protein-manufacturing machinery, and forces it to replicate the viral genome and produce viral proteins to make new viruses. As you listen, you may be surprised by the pleasant, even relaxing, tone of the music. But it tricks our ear in the same way the virus tricks our cells. It's an invader disguised as a friendly visitor. Through music, we can see the SARS-CoV-2 spike from a new angle, and appreciate the urgent need to learn the language of proteins.

Q: Can any of this address Covid-19, and the virus that causes it?

A: In the longer term, yes. Translating proteins into sound gives scientists another tool to understand and design proteins. Even a small mutation can limit or enhance the pathogenic power of SARS-CoV-2. Through sonification, we can also compare the biochemical processes of its spike protein with previous coronaviruses, like SARS or MERS.

In the music we created, we analyzed the vibrational structure of the spike protein that infects the host. Understanding these vibrational patterns is critical for drug design and much more. Vibrations may change as temperatures warm, for example, and they may also tell us why the SARS-CoV-2 spike gravitates toward human cells more than other viruses. We're exploring these questions in current, ongoing research with my graduate students.

We might also use a compositional approach to design drugs to attack the virus. We could search for a new protein that matches the melody and rhythm of an antibody capable of binding to the spike protein, interfering with its ability to infect.

Q: How can music aid protein design?

A: You can think of music as an algorithmic reflection of structure. Bach's Goldberg Variations, for example, are a brilliant realization of counterpoint, a principle we've also found in proteins. We can now hear this concept as nature composed it, and compare it to ideas in our imagination, or use AI to speak the language of protein design and let it imagine new structures. We believe that the analysis of sound and music can help us understand the material world better. Artistic expression is, after all, just a model of the world within us and around us.

Co-authors of the study in Extreme Mechanics Letters are: Zhao Qin, Hui Sun, Eugene Lim and Benedetto Marelli at MIT; and Lingfei Wu, Siyu Huo, Tengfei Ma and Pin-Yu Chen at IBM Research. Co-author of the study in APL Bioengineering is Chi-Hua Yu. Buehler's sonification work is supported by MIT's Center for Art, Science and Technology (CAST) and the Mellon Foundation.


2021.AI Opens up the Grace Data and AI Platform to Accelerate the Response to COVID-19 – AiThority

Posted: at 1:49 pm

In the wake of the global spread of COVID-19, it is more important than ever to develop solutions to fight the virus and related crisis problems. To foster collaboration and share knowledge, 2021.AI now offers free access to its data and AI platform, Grace.

An increasing number of people across multiple types of research institutions and companies are leveraging data-driven approaches to tackle the different problems arising from the COVID-19 outbreak. While many researchers and companies are working on individual projects and models, there is tremendous potential in uniting efforts by sharing results to accelerate and improve findings, and ultimately achieve higher efficiency and better results. Collaboration and joint contributions are so much more powerful.

2021.AI will offer a collaborative AI platform to assist in fighting the COVID-19 crisis. By accelerating collaboration across communities, such as academic institutions, governmental institutions, and companies, 2021.AI offers free access to the Grace AI Platform, preloaded with a range of public data sets and standard AI models. By opening up the platform for collaboration, more people will have the opportunity to contribute and directly address COVID-19 related problems.


In addition to the Grace AI Platform, 2021.AI will offer access to free data science expertise, training, and support for AI model development. This will be delivered together with Neural, a Danish data science organization, whose members are joining this initiative with cross-disciplinary competencies in data science and medicine.

2021.AI believes that almost all data research and development efforts can be much more efficient when supported by data science expertise, substantially accelerating the development of new data and AI solutions and innovative thinking.

"We have a clear ambition with this project, which is to contribute directly with our assets and resources to assist in fighting the COVID-19 crisis. Our Grace Platform and data science expertise can both directly and indirectly impact COVID-19 related projects, supporting as many as possible with new insights and solutions. The keywords here are cross-disciplinary collaboration and joint contributions, making all contributors more powerful and efficient in tackling COVID-19 crisis projects," says Mikael Munck, Founder and CEO at 2021.AI.


HOW CAN DATA SCIENCE AND AI HELP TACKLE COVID-19 PROBLEMS?

There are many relevant use cases where advanced data science and AI can contribute to tackling COVID-19 problems and challenges.

One example of the standard tools now embedded in the Grace platform is the integrated Epidemic Calculator.

The above are a few examples of opportunities to be explored. The Grace AI Platform will also contain standard predictive AI models, e.g. on how each country will be affected in the future; these models can then be integrated and linked to external BI platforms or other systems.
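For context, epidemic calculators of the kind mentioned above are typically built on compartmental models. Below is a minimal discrete-time SIR sketch with illustrative parameters; it is not taken from the Grace platform.

```python
# Minimal discrete-time SIR epidemic model, in the spirit of the
# Epidemic Calculator mentioned above. Parameters are illustrative.

def sir(population, infected0, beta, gamma, days):
    """beta = transmission rate, gamma = recovery rate.
    Returns a list of (susceptible, infected, recovered) tuples per day."""
    s, i, r = population - infected0, float(infected0), 0.0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history
```

With beta = 0.3 and gamma = 0.1 (basic reproduction number 3), the simulation produces the familiar epidemic curve: infections rise, peak, and decline as the recovered compartment grows.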

The Grace Platform efficiently facilitates versatile and flexible collaboration, while also providing data and AI governance, including audit trails and documentation, along with validation of model design principles and internal model design parameters. Should participants in this initiative require specific AI governance support to validate that scientific methods are trustworthy and reliable, such additional support will be individually evaluated by 2021.AI.



Artificial intelligence and construction’s weak margins: learning the lesson of chess – Innovation – GCR

Posted: at 1:49 pm

In 1968, the English international chess master David Levy made a bet with artificial intelligence (AI) pioneer John McCarthy that a computer would not be able to beat him within 10 years.

Ten years later, in 1978, on the eve of the bet's expiry, Levy sat down to a six-game match against Chess 4.7, a leading program developed by Northwestern University in the US. He won the match, and so won the bet, but the computer did defeat him in game four, marking the first computer victory against a human master in a tournament.

Fast forward to 1997: world chess grandmaster Garry Kasparov lost an entire match to IBM's Deep Blue, which heralded a new era in computing. Today computers regularly beat humans, not only at chess but at other games such as Go and, recently, even poker, highlighting their growing capability in tasks once thought to require human cognition.

Data analytics, artificial intelligence (AI) and machine learning (an application of AI), which is the ability of a computer to learn from data and make decisions, offer the potential of a new era in construction thinking and optimisation.

GIGO no longer applies

You may have heard the adage, garbage in, garbage out, or GIGO, meaning the output of a computer system is only as good as the quality of data fed into it.

When people described data as "garbage", they were not referring to data that was irrelevant, just data that had been captured in an unstructured way, making it useless for the purposes of retrieval, reporting and analysis.

But GIGO no longer applies in all instances, thanks to database technology and AI.

Now, data once classed as garbage (which even today might include texts, emails and PDFs, to name a few) can be captured in data lakes, vast repositories of unstructured data, and combined with structured data sources to become powerful information ecosystems.

Machine-learning based AI tools can be used to interrogate the data, looking not only for connections and patterns but also for meaning and sentiment, once classed as a purely human function.

Add to this the reality that the data can be analysed in significantly greater quantities, faster and more powerfully than humans could ever dream of, and we have a new, game-changing capability.

What this means for construction

For construction, that will mean the ability to look at all data and consider millions of permutations and combinations of ways of designing, planning, scheduling or managing a project.

Contractors have always run scenarios to find more productive alternatives. But the number of scenarios you can accurately consider has been limited by time, by the capacity of the human mind, and by the limitations of GIGO-era computing.

People tend to believe they can arrive at a better solution than a computer, and they probably can, if they have all the information and unlimited time. But that's the rub. We don't really have all the information, and we certainly do not have unlimited time.

You might reasonably have the time to consider 10, 20 or even 30 different scenarios but, unless you want to spend thousands of person-hours, at some point you have to get on with it, relying on assumptions based on the information you have, and what you believe has worked before.

What if, however, what you think worked before is based on imperfect data and therefore incorrect assumptions? What you know is probably only the tip of an iceberg in comparison to what there is to know.

Robert Brown is Group Chief Executive Officer of COINS

With AI you can examine significantly larger datasets and look at hundreds of thousands, if not millions, of permutations, considering the impact of factors and events that the human brain cannot process.

All data and the best people at your fingertips

Contractors sit on treasure troves of data, but the data are marooned in inaccessible islands: spreadsheets, historical project databases, project management software, financial software, emails, texts, PDFs, and so on.

There are other datasets that may have impacted on a project, but are not available for analysis in the GIGO era.

Imagine being able to look back at all data from all road-building projects and see what the impact was from factors such as labour availability, sickness, holidays, weather, financial results, economic conditions, planning regulations, exchange rates, interest rates, tax schemes and the performance of clients, material providers, and supply-chain partners.

By combining all those data, and using AI to spot trends, patterns and correlations, and running almost unlimited what-if scenarios, you could have much more information regarding the best way to bid for, structure, finance, plan, resource and schedule a project.
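A what-if scenario search of the kind described can be sketched as brute-force enumeration over factor combinations, each scored by a cost model. The factors and cost function below are invented stand-ins for the real project data an AI system would learn from.

```python
# Toy "what-if" search: enumerate scenario combinations and score each
# with a simple cost model. Factors, rates and the cost formula are
# illustrative assumptions, not real project data.
from itertools import product

FACTORS = {
    "crew_size": [10, 20, 30],
    "shifts": [1, 2],
    "season": ["summer", "winter"],
}

def cost(crew_size, shifts, season):
    days = 600 / (crew_size * shifts)          # more labour -> shorter schedule
    weather_penalty = 1.25 if season == "winter" else 1.0
    labour = crew_size * shifts * days * 300   # daily rate per worker
    overhead = days * 2000 * weather_penalty   # site costs scale with duration
    return labour + overhead

def best_scenario():
    """Exhaustively score every factor combination; return the cheapest."""
    return min(product(*FACTORS.values()), key=lambda s: cost(*s))
```

With these toy numbers, labour cost is constant across scenarios, so the search favours the plan that minimises duration-driven overhead: the largest crew, double shifts, and a summer start. A real system would score millions of such combinations against learned, data-driven cost models rather than a hand-written formula.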

You would have much greater clarity, based on the conditions and circumstances that actually exist.

These patterns and trends become predictive tools, allowing us to move beyond assumption and gut-feel to better discover in what circumstances projects thrived or, conversely, under-performed, so that we can optimise plans and mitigate risks.

If this feels a little frightening, look at it this way: it would be like having all the knowledge and experience of all of the best people you've ever worked with at your fingertips when making the next decision.

Pay heed to the heart attacks

Contractors work on unbelievably fine margins and take on huge amounts of risk. We should treat the fall of Carillion and repeated profit warnings from our biggest firms like mild heart attacks, warning us that we need to change.

Much of the current pain is down to the way the industry is structured and commercially managed. This is something we can't escape and which AI on its own won't necessarily change, but it can inform us so that we are able to challenge our preconceived view of the world and possibly see it differently.

AI and machine learning applied to the analysis of data will lead to the discovery of approaches none of us thought possible before, opening up new and innovative ways of doing things that will reduce cost and risk and increase margins.

In construction, even a small improvement on margin, gained by managing a project a little differently in a way you wouldn't normally have thought of, is worth it. It is the contractor's life-blood. Collectively, these incremental gains add up to a winning difference.

Construction is now in a 1978 chess game, but with the capability of 2020

The technologies and techniques are starting to appear, and now is the time for contractors to get curious, challenge the status quo and begin to open their minds to new possibilities.

This is where David Levy was in 1978, after losing his first game to a computer.

Construction industry productivity is among the lowest of any industry sector, and it's also at the bottom of the league when it comes to investing in technology. I don't believe it can be a complete coincidence that the most productive global sectors habitually embrace new technology.

Disruption is coming and there will be winners and losers. The question for larger companies is not "Can you afford to do it?", but rather, "Can you afford not to?"

By all means, start small, try it internally, on a part of the business that you know needs to improve, and test the results.

By doing this you will start developing a kernel of expertise in the organisation, so that you're ready to move when the wave rolls in. This is no longer bleeding edge but leading edge, and the winners of the future are already embracing these new technologies.

David Levy got it. After losing the game to Chess 4.7, he wrote: "I had proved that my 1968 assessment had been correct, but on the other hand my opponent in this match was very, very much stronger than I had thought possible when I started the bet."

Levy went on to offer $1,000 to the developers of a chess program that could beat him in a match. He lost that money in 1989.

Top image: A Mephisto Mythos chess computer, circa 1995 (Morn/CC BY-SA 4.0)


Huawei Atlas 900 AI Cluster Wins the Red Dot Award 2020 – Yahoo Finance

Posted: at 1:49 pm

ESSEN, Germany, April 3, 2020 /PRNewswire/ -- The Huawei Atlas 900 AI cluster is the winner of the Red Dot Award 2020, standing out from thousands of entries to clinch the prize. Reviewed by a professional jury panel, the Atlas 900 AI cluster is recognized for its sharp design and groundbreaking innovation. After the Atlas 300 and Atlas 500, Atlas 900 becomes the third member of the Huawei Atlas family to be honored by the Red Dot Award. The awards are a hallmark of unparalleled quality and design for the Huawei Atlas products.

Huawei Atlas 900 AI cluster

The Atlas 900 AI cluster set a new benchmark with its top cluster network, modular deployment, heat dissipation system, holistic design, performance, extensibility, and human-centric details.

The Red Dot Award is one of the world's most prestigious awards for industrial design. This award is just the latest in a list of honors attached to the Atlas family. Other recent honors include the GSMA GLOMO Awards 2020, where the Atlas 900 was awarded the Tech of the Future award.

About Huawei

Huawei is a leading global provider of information and communications technology (ICT) infrastructure and smart devices. With integrated solutions across four key domains (telecom networks, IT, smart devices, and cloud services), we are committed to bringing digital to every person, home and organization for a fully connected, intelligent world.

Huawei's end-to-end portfolio of products, solutions and services are both competitive and secure. Through open collaboration with ecosystem partners, we create lasting value for our customers, working to empower people, enrich home life, and inspire innovation in organizations of all shapes and sizes.

At Huawei, innovation focuses on customer needs. We invest heavily in basic research, concentrating on technological breakthroughs that drive the world forward. We have more than 194,000 employees, and we operate in more than 170 countries and regions. Founded in 1987, Huawei is a private company wholly owned by its employees. For more information, please visit Huawei online at http://www.huawei.com or follow us on:

http://www.linkedin.com/company/Huawei
http://www.twitter.com/Huawei
http://www.facebook.com/Huawei
http://www.youtube.com/Huawei

About the Red Dot Award

The Red Dot Award is an internationally recognized seal for the best-of-breed design quality. It is awarded by the Design Zentrum Nordrhein Westfalen in Essen, Germany. The Red Dot Award, together with the IF Award of Germany and the IDEA Award of the U.S., are the top design awards in the world. Every year, tens of thousands of entries contend for the Red Dot Award. Only products of unbeatable innovation, usability, and user experience are recognized.


SOURCE Huawei


VOS Digital Media Group Invests in Advanced AI in Breaking News, Sports, Natural Disaster Awareness and E-Commerce – Benzinga

Posted: at 1:49 pm

Natural Disaster AI Technology to Provide Geo-Specific Breaking News Integrated with Wellness, Healthcare and First Responder Alerts (COVID-19 and other Natural Disaster Emergencies)

New York, NY, April 03, 2020 --(PR.com)-- VOS Digital Media Group, Inc., a leading technology media company, today announced the investment in and launch of industry-leading artificial intelligence solutions to complement its digital media technology platform. VOS AI-enhanced solutions will initially be available to subscribers, telcos and media partners in the United States, Canada, and Latin America across web, mobile, and OTT devices.

With media technology solutions that are already providing some of the industry's fastest, most accurate data and content feeds, featuring partners from around the world, the addition of custom AI solutions will give VOS the ability to deliver some of the most relevant and fastest-breaking news, sports and natural disaster information to consumers and businesses.

"We're investing in powerful, groundbreaking AI technology and we plan to accelerate its use for insightful and personalized client experiences, to enhance our client partnerships and to accelerate hyper-targeted advertising, video consumption, e-commerce and analytics globally," stated Paul Feller, Chairman and Chief Executive Officer of VOS Digital Media Group. "Our technology team is currently working with what we believe is the most advanced technologist in this sector to integrate new AI that will allow VOS to provide the fastest access to verified breaking news globally."

"We will be wrapping sophisticated content flows originating from machine learning models into timely, relevant and impactful consumer experiences," stated Julio Hernandez-Miyares, Chief Technical Officer for VOS Digital Media Group. "For partners requiring personalized content experiences across multiple languages and geographies, both globally and throughout LATAM and North America, we're developing AI-powered products and services that will be relevant to businesses, content sellers and creators, and end consumers alike."

Whether it's the latest in sports scores from the world's top leagues or coronavirus advisories from relevant medical experts, the immediacy and hyper-localization of VOS products gives partners an unparalleled ability to provide personalized content solutions to regional, national, or even global audiences.

About VOS Digital Media Group

VOS is a global digital video exchange and technology platform providing a seamless process for bringing together content creators and media companies. We specialize in providing and maintaining content sales and sourcing scalability, reducing labor and editorial costs, eliminating errors in metadata assignment and extraction, and drastically decreasing the time to market for both video creators and buyers. https://www.vosdmg.com

Contact Information:
VOS Digital Media Group
Christopher Stankiewicz
347-620-9272
Contact via Email
www.vosdmg.com


Press Release Distributed by PR.com


The value of AI in times of crisis Media News – Media Update

Posted: at 1:49 pm

While it's safe to say that no one saw this pandemic coming, it's also safe to say that some businesses have been better prepared than others. Companies that survive every great recession, calamity or pandemic all have one thing in common: they are always focused on innovation and preparing for the future.

Technology is already at the forefront of helping businesses to manage disruptions: video conferencing applications like Skype and collaborative tools like Trello are allowing people to work together while physically being apart. But for smart businesses, technology is doing much more than that.

Newsclip has always been one step ahead of the rest, and thanks to that, it is now reaping the rewards of many years of hard work. Having spent over 10 years researching, testing and developing advanced AI-powered systems, it is finally getting a chance to really see what its systems are capable of.

This is where the AI-powered tech comes in!

"People's expectations have become 'now'. If they don't get a quick response from a company on social media, or if the load time of a website is longer than three seconds, they move on," MD Simon Dabbs explains.

"It's our responsibility to provide our clients with cutting-edge, modern ways of managing vast arrays of information. Clients need to make informed decisions as quickly as possible, and Newsclip provides them with the intelligence to do so."

The company's AI technology combines natural language processing (NLP) and machine learning in its Data Engine. NLP allows the system to interpret human language, while machine learning enables it to recognise patterns within data.
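
The article doesn't describe the Data Engine's internals, but the two steps it names (interpreting language, then recognising patterns in the result) can be illustrated with a deliberately crude sketch. All data, function names and the keyword-scoring approach here are hypothetical, not Newsclip's:

```python
from collections import Counter
import re

def tokenize(text):
    """Crude NLP step: lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def topic_signal(articles, keywords):
    """Crude pattern-recognition step: count keyword hits per article
    and rank the articles by relevance to the monitored topic."""
    scores = []
    for title, body in articles:
        tokens = Counter(tokenize(body))
        scores.append((sum(tokens[k] for k in keywords), title))
    return sorted(scores, reverse=True)

articles = [
    ("Brand A launch", "the new product launch drew wide media coverage"),
    ("Market note", "bond yields rose slightly on Tuesday"),
]
print(topic_signal(articles, ["product", "launch", "media"]))
```

A production system would replace both steps with trained models, but the pipeline shape (text in, ranked intelligence out) is the same.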

The fact that all of the brand's systems are AI-integrated means that it can continue to offer clients the same level of service, even though all of its staff are now working from home.

All of the client login portals have been developed as Progressive Web Applications (PWAs), meaning that they can be accessed from any device with exactly the same navigation. There's no limited functionality on the mobile or tablet versions; they are just as responsive as the full desktop versions.

It's likely that the business landscape across the world will never be the same again, but those brands that have made innovation a part of their day-to-day will be the ones leading the way forward.

"In-house technology development has huge financial and administrative overheads," says Dabbs. "However, the long-term business stability and cutting-edge development that comes with this is not only rewarding, but invaluable."

For more information, visit www.newsclip.co.za. You can also follow Newsclip on Facebook, LinkedIn or Twitter.


UK-US Initiative to Screen Drugs Using AI for Coronavirus… – Labiotech.eu

Posted: at 1:49 pm

The UK company Exscientia will use its AI-driven drug discovery platform to examine a collection of 15,000 potential coronavirus disease treatments in collaboration with the US research institute Calibr and the non-profit synchrotron company Diamond Light Source.

The team's huge collection of drug molecules will be provided by Calibr, part of the US medical institute Scripps Research. Diamond Light Source will use its facilities to examine protein structure and replicate essential viral proteins for experimentation.

The Bill and Melinda Gates Foundation funded the collection of these drug candidates, which includes nearly every known drug that has been approved or extensively tested for safety but not yet approved for therapeutic use.

The collection will be shipped from Scripps Research in California to Oxford where Exscientia and Diamond Light Source can work together to screen and test the collection as well as modifications of the drug candidates against key viral proteins.

Diamond began studying the novel coronavirus SARS-CoV-2 shortly after the outbreak, using its expertise in crystallography to identify the structure of viral protein targets and find potential therapeutic sites.

"We saw an opportunity to use our expertise in super high-throughput drug binding experiments," David Owen, Doctoral Research Associate at Diamond, told me. "We were also able to solve the [protein] structure at a very high resolution. This will provide the chemists of the world with extra information about the different potential drug binding sites."

For this, the team used its synchrotron device, a machine that produces a high-energy electron beam, in addition to electron microscopy to visualize the drug binding sites on viral proteins.

"For the time being, all of the Diamond beamlines will be focused on Covid-19 work because we want to be able to do the most valuable work with the fewest possible staff," Owen continued. "We will run the beamlines for as long as there are samples to put on them."

Diamond has identified three key protein targets, the 3CL protease, the RNA polymerase, and the spike protein, that will help inform Exscientia's AI-driven drug screening technology.

This isn't the first project to leverage AI in drug discovery during the coronavirus crisis. The Cambridge firm BenevolentAI identified a potential coronavirus treatment with AI in March, while the German company Innoplexus announced last week that its AI platform identified potential drugs and drug combinations that may treat Covid-19 patients, using published data and information on already-approved therapeutics.

Exscientia's approach differs significantly. Firstly, the data will include not just approved drugs but also drugs at different stages of clinical testing. Should a candidate be identified, its approval for treating Covid-19 could be accelerated depending on how far it has already progressed through the pipeline.

Andrew Hopkins, CEO of Exscientia, told me of another major difference: experimentation.

"A key difference as well is that we are generating brand new data, because we will be testing all of the drugs against these three targets," Hopkins explained.

"The Innoplexus approach, I believe, is mining existing literature to make connections, and what we're doing is generating brand new data as well, which, if we're fortunate, could be a source for potentially discovering a drug for repurposing directly from that work. It will also give us data to drive our machine learning models."

Even before turning its platform to the fight against the coronavirus, Exscientia had made a name for itself in the AI-driven drug discovery space. Last year, the firm signed what was then the largest AI-based drug discovery deal, with Celgene, for cancer and autoimmune indications, and in January it became the first company to get an AI-designed drug into clinical trials.



At Stanford’s AI Conference, Harnessing Tech to Fight COVID-19 – ExtremeTech

Posted: at 1:49 pm


As another sign of the times, Stanford repurposed its planned Human-Centered AI (HAI) Conference into a digital-only, publicly accessible symposium on how technology has been and can be employed in fighting the spread and assisting in the treatment of COVID-19. We heard from researchers, doctors, statisticians, AI developers, and policymakers about a wide variety of strategies and solutions. Some of them have been working on this problem for a long time, some have quickly re-purposed their flu research, and others have shifted entirely from what they were doing before because of the urgency of this crisis.

For public officials trying to assess how various interventions will affect the spread of COVID-19 and its impact on health infrastructure, or for curious individuals who want more information than is provided in often confusing national briefings, Stanford's SURF (Systems Utilization Research for Stanford Medicine) tool lets you experiment with various values for the spread of the disease and the predicted effectiveness of possible interventions, and see how they affect how many people become ill, and how severely. The tool is pre-loaded with current case numbers by county throughout the US.
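
SURF's actual model isn't reproduced here, but the kind of what-if experiment it enables can be sketched with a minimal discrete SIR epidemic model. Every parameter value below is illustrative, not SURF's; the point is only that lowering the transmission rate (an intervention such as distancing) lowers the peak caseload:

```python
def sir_step(s, i, r, beta, gamma, n):
    """Advance a discrete-time SIR model by one day.
    beta: transmission rate, gamma: recovery rate, n: population."""
    new_inf = beta * s * i / n
    new_rec = gamma * i
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def peak_infected(n=1_000_000, i0=100, beta=0.4, gamma=0.1, days=120):
    """Run the model and return the peak simultaneous infection count."""
    s, i, r = n - i0, i0, 0
    peak = i
    for _ in range(days):
        s, i, r = sir_step(s, i, r, beta, gamma, n)
        peak = max(peak, i)
    return peak

# Halving the contact rate sharply lowers the peak load on hospitals:
print(peak_infected(beta=0.4) > peak_infected(beta=0.2))
```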

From this graphic, you can see the chronology of how the virus spread around the world.

One of the most impressive aspects of the HAI event was the amazing number of non-profit research efforts made possible by scientists dedicated to improving public health. One of those is Nextstrain.org. The group provides an open-source toolkit for bioinformatics and collects data created with it to provide visualizations of various aspects of a variety of pathogens, now including the novel coronavirus. The featured image for this story is a genetic family tree of 2499 samples from around the world. You can visit the site and even see an animation of how the virus must have spread based on how its genome mutated.

While mainland China stumbled badly in its initial response to COVID-19, and we in the US clearly acted much too slowly to nip it in the proverbial bud, a few countries, including Singapore and Taiwan, have done a particularly effective job of preventing the pandemic from ravaging their population. A number of their strategies have been widely reported, but there are also several very interesting applications of technology used in those countries that were covered at the HAI conference.

The Stanford Woods Institute's Michele Barry told us about a clever mobile app, TraceTogether, that has been widely deployed in Singapore. It uses a combination of location history and current Bluetooth proximity not only to let you know whether you are near someone who has tested positive for the virus, but to alert you in the event that someone you have been near in the last couple of weeks is now testing positive. Obviously this involves sharing a lot of information, which would face plenty of legal and social challenges in the US and most other countries. But it has proven very effective in slowing the spread of the disease. The same is true of the mandatory location tracking implemented for those entering the country with any symptoms.

Chinese State media and US mainstream media show different perspectives in their coverage. Courtesy of Stanford Cyber Policy Center.

Similarly, Taiwan implemented extensive testing and mandatory quarantine of symptomatic individuals. Incoming flights were boarded and temperatures taken, for example. Those with fevers found on planes or when entering public buildings were placed in quarantine, brought food, and paid a salary. Passenger travel databases were also connected to the national health database, so it was possible to alert those who had been near an infected individual so that they could get tested. It also meant that any time anyone visited a doctor, the physician would know in advance if they were at high risk of having been exposed and could therefore take precautions. Real-time mask availability maps were made available online in Taiwan, which worked because, after its 2003 SARS experience, the country acted early to ramp up mask production so that there were enough for everyone to use one all the time.

One striking number from mainland China: once it decided to deal with the outbreak head-on, it sent 15,000 epidemiologists to Hubei Province, twice as many as there are in total in the United States.

Several of the speakers addressed the manifold issues with the large amount of often contradictory information, along with misinformation and disinformation, that is bombarding people worldwide. The specifics of the problem vary greatly by country and by demographic. In some countries like China, information tends to come top-down and be heavily filtered, so the problem becomes finding additional sources of information. In countries like the US, the problem can be the opposite: there are far too many sources of information, many of which aren't reliable or are deliberately spreading false information. But even here, politicization and factionalization have meant that reliable sources of information can be hard to come by.

HealthMap has added COVID-19 tracking to its existing crowdsourced flu-tracking capability.

One place where all the speakers were in agreement is that increased data literacy and critical thinking are key skills for individuals wanting to understand what is happening and to have an informed perspective on how they should act, and how they should encourage others to act. In terms of data literacy, two concepts now front and center are the implications of exponential growth and the interpretation of margins of error in forecasts. Anyone trained in science, engineering, or math may be familiar with them, but it is clear many individuals, including many of our policy-making public officials, aren't. As for critical thinking, checking sources and putting data in context is more important than ever, given the large amount of rapidly evolving data being produced on this topic. Even within the research community, the urgency to get research published is causing a lot of early preprints and rushed studies with limited datasets.
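
The exponential-growth intuition the speakers stressed is easy to make concrete. A small sketch, with all figures illustrative rather than taken from any real caseload:

```python
import math

def cases_after(days, initial=100, doubling_days=3):
    """Exponential growth: cases double every `doubling_days` days."""
    return initial * 2 ** (days / doubling_days)

# 100 cases doubling every 3 days exceeds 100,000 within a month:
print(round(cases_after(30)))  # 100 * 2**10 = 102400

def doubling_time(growth_rate_per_day):
    """Days needed to double at a constant daily growth rate."""
    return math.log(2) / math.log(1 + growth_rate_per_day)

# A 26%-per-day growth rate corresponds to roughly a 3-day doubling time:
print(round(doubling_time(0.26), 1))
```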

We've only covered a few of the highlights of Stanford's HAI event in this article. There was also an entire technical session on tactics for developing drugs, and several excellent talks on telemedicine and using AI for eldercare. For those of you involved with machine learning, Kaggle's Anthony Goldbloom gave a great description of how the platform is being deployed to assist, and how individuals can get involved. Harvard's John Brownstein also showed off some of the impressive crowdsourced data that populates healthmap.org. A few of the full talks are already online on the event website, and more are being added as they become available.


Select the Right Flash Memory for Your Battery-Powered AI Speaker with Voice Control – Electronic Design

Posted: at 1:49 pm

Series: The JESD204 Story

A new converter interface is steadily picking up steam and looks set to become the preferred protocol for future converters. This new interface, JESD204, was originally rolled out several years ago, but it has undergone revisions that make it a much more attractive and efficient converter interface.

The steadily increasing resolution and speed of converters has pushed demand for a more efficient interface. The JESD204 interface brings this efficiency and offers several advantages over its complementary metal-oxide semiconductor (CMOS) and low-voltage differential-signaling (LVDS) predecessors in terms of speed, size, and cost.

Designs employing JESD204 enjoy the benefits of a faster interface to keep pace with the faster sampling rates of converters. In addition, a reduction in pin count leads to smaller package sizes and a lower number of trace routes that make board designs much easier and offer lower overall system cost. The standard is also easily scalable so that it can be adapted to meet future needs. This has already been exhibited by the two revisions that the standard has undergone.

Since its introduction in 2006, the JESD204 standard has seen two revisions and is now at Revision B. As the standard has been adopted by an increasing number of converter vendors and users, as well as FPGA manufacturers, it's been refined, and new features have been added that increase efficiency and ease of implementation. The standard applies to both analog-to-digital converters (ADCs) and digital-to-analog converters (DACs), and is primarily intended as a common interface to FPGAs (but may also be used with ASICs).

JESD204: What Is It?

The original version of JESD204 was released in April 2006. The standard describes a multigigabit serial data link between converter(s) and a receiver, typically a device such as an FPGA or ASIC. In this original version of JESD204, the serial data link was defined for a single serial lane between a converter or multiple converters and a receiver (Fig. 1).

1. A representation of the original JESD204 standard.

The lane shown is the physical interface between M number of converters and the receiver, which consists of a differential pair of interconnects utilizing current-mode-logic (CML) drivers and receivers. The link shown is the serialized data link that's established between the converter(s) and the receiver. The frame clock is routed to both the converter(s) and the receiver and provides the clock for the JESD204 link between the devices.

The lane data rate is defined between 312.5 Mb/s and 3.125 Gb/s, with both source and load impedance defined as 100 Ω ±20%. The differential voltage level is defined as nominally 800 mV p-p, with a common-mode voltage range from 0.72 to 1.23 V. The link utilizes 8b/10b encoding that incorporates an embedded clock, removing the need to route an additional clock line and the associated complexity of aligning an additional clock signal with the transmitted data at high data rates.
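
One practical consequence of 8b/10b encoding is that only 8 of every 10 bits on the line carry payload, so 20% of the raw lane rate is coding overhead. A quick sketch of the resulting effective data rate:

```python
def payload_rate_gbps(lane_rate_gbps, lanes=1):
    """Effective payload throughput of an 8b/10b-encoded serial link:
    8 payload bits are carried in every 10 line bits."""
    return lane_rate_gbps * lanes * 8 / 10

# A single JESD204 lane at the 3.125-Gb/s maximum carries 2.5 Gb/s of samples:
print(payload_rate_gbps(3.125))  # 2.5
```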

It became obvious, as the JESD204 standard began gaining popularity, that the standard needed to be revised to incorporate support for multiple aligned serial lanes with multiple converters. This would accommodate the increasing speeds and resolutions of converters.

This realization led to the first revision of the JESD204 standard, which became known as JESD204A. This revision of the standard added the ability to support multiple aligned serial lanes with multiple converters. The lane data rates, supporting from 312.5 Mb/s up to 3.125 Gb/s, remained unchanged as did the frame clock and the electrical interface specifications.

Increasing the capabilities of the standard to support multiple aligned serial lanes made it possible for converters with high sample rates and high resolutions to meet the maximum supported data rate of 3.125 Gb/s. Figure 2 shows a graphical representation of the additional capabilities added in the JESD204A revision to support multiple lanes.

2. JESD204A, the first revision of JESD204.

Although both the original JESD204 standard and the revised JESD204A standard offered higher performance than legacy interfaces, they still lacked a key element: deterministic latency in the serialized data on the link. When dealing with a converter, it's important to know the timing relationship between the sampled signal and its digital representation. It's then possible to properly recreate the sampled signal in the analog domain once the signal has been received (this situation is, of course, for an ADC; a similar situation applies for a DAC).

This timing relationship is affected by the latency of the converter, which is defined for an ADC as the number of clock cycles between the instant of the sampling edge of the input signal and the time that its digital representation is present at the converter's outputs. Similarly, in a DAC, the latency is defined as the number of clock cycles between the time the digital signal is clocked into the DAC and the time the analog output begins changing.

In the JESD204 and JESD204A standards, there were no defined capabilities that would deterministically set the latency of the converter and its serialized digital inputs/outputs. In addition, converters were continuing to increase in both speed and resolution. These factors led to the introduction of the second revision of the standard, JESD204B.

The Arrival of JESD204B

In July of 2011, the second and current revision of the standard, JESD204B, was released. One of the key components of the revised standard was the addition of provisions to achieve deterministic latency. In addition, the data rates supported were pushed up to 12.5 Gb/s, broken down into different speed grades of devices. This revision of the standard calls for the transition from using the frame clock to using the device clock as the main clock source. Figure 3 gives a representation of the additional capabilities added by the JESD204B revision.

3. The second and current revision, JESD204B.

In the previous two versions of the JESD204 standard no provisions were defined to ensure deterministic latency through the interface. The JESD204B revision remedies this issue by providing a mechanism to ensure that, from power-up cycle to power-up cycle and across link resynchronization events, the latency should be repeatable and deterministic.

One way to accomplish this is by initiating the initial lane-alignment sequence in the converter(s) simultaneously across all lanes at a well-defined moment in time by using an input signal called SYNC~. Another implementation is to use the SYSREF signal, which is a newly defined signal for JESD204B. The SYSREF signal acts as the master timing reference and aligns all of the internal dividers from device clocks as well as the local multiframe clocks in each transmitter and receiver. This helps to ensure deterministic latency through the system.

The JESD204B specification calls out three device subclasses: Subclass 0, no support for deterministic latency; Subclass 1, deterministic latency using SYSREF; and Subclass 2, deterministic latency using SYNC~. Subclass 0 can simply be compared to a JESD204A link. Subclass 1 is primarily intended for converters operating at or above 500 MSPS, while Subclass 2 is primarily for converters operating below 500 MSPS.

In addition to the deterministic latency, the JESD204B version increases the supported lane data rates to 12.5 Gb/s and divides devices into three different speed grades. The source and load impedance is the same for all three speed grades, defined as 100 Ω ±20%.

The first speed grade aligns with the lane data rates from the JESD204 and JESD204A versions of the standard and defines the electrical interface for lane data rates up to 3.125 Gb/s. The second speed grade in JESD204B defines the electrical interface for lane data rates up to 6.375 Gb/s. This speed grade lowers the minimum differential voltage level to 400 mV p-p, down from 500 mV p-p for the first speed grade. The third speed grade in JESD204B defines the electrical interface for lane data rates up to 12.5 Gb/s. This speed grade lowers the minimum differential voltage level required for the electrical interface to 360 mV p-p. As the lane data rates increase for the speed grades, the minimum required differential voltage level is reduced to make physical implementation easier by reducing required slew rates in the drivers.
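
The speed-grade boundaries and minimum differential swings quoted above can be captured in a small lookup. A sketch (the grade numbering is informal shorthand for "first/second/third speed grade"):

```python
def jesd204b_speed_grade(lane_rate_gbps):
    """Map a lane rate to its JESD204B speed grade and the minimum
    differential voltage swing (mV p-p) quoted for that grade."""
    if lane_rate_gbps <= 3.125:
        return 1, 500   # up to 3.125 Gb/s
    if lane_rate_gbps <= 6.375:
        return 2, 400   # up to 6.375 Gb/s
    if lane_rate_gbps <= 12.5:
        return 3, 360   # up to 12.5 Gb/s
    raise ValueError("beyond the JESD204B maximum of 12.5 Gb/s")

print(jesd204b_speed_grade(6.0))  # (2, 400)
```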

To allow for more flexibility, the JESD204B revision transitions from the frame clock to the device clock. Previously, in the JESD204 and JESD204A revisions, the frame clock was the absolute timing reference in the JESD204 system. Typically, the frame clock and the sampling clock of the converter(s) were the same. This didn't offer a lot of flexibility and could cause undesired complexity in system design when attempting to route this same signal to multiple devices and account for any skew between the different routing paths.

In JESD204B, the device clock is the timing reference for each element in the JESD204 system. Each converter and receiver is given its respective device clock from a clock generator circuit that's responsible for generating all device clocks from a common source. This allows for more flexibility in the system design, but requires that the relationship between the frame clock and device clock be specified for a given device.

JESD204: Why We Should Pay Attention to It

In much the same way as LVDS began overtaking CMOS as the technology of choice for the converter digital interface several years ago, JESD204 is poised to tread a similar path in the next few years. While CMOS technology is still around today, it has mostly been overtaken by LVDS. The speed and resolution of converters, as well as the desire for lower power, eventually render CMOS and LVDS inadequate for converters. As the data rate increases on CMOS outputs, the transient currents also increase and result in higher power consumption. While the current, and thus power consumption, remains relatively flat for LVDS, the interface has an upper bound on the speed it can support.

This is due to the driver architecture, as well as the numerous data lines that must all be synchronized to a data clock. Figure 4 illustrates the different power-consumption requirements of CMOS, LVDS, and CML outputs for a dual 14-bit ADC.

4. The graph compares CMOS, LVDS, and CML driver power.

At approximately 150 to 200 MSPS and 14 bits of resolution, CML output drivers start to become more efficient in terms of power consumption. Due to the serialization of the data, CML offers the advantage of requiring fewer output pairs for a given resolution than LVDS and CMOS drivers. The CML drivers specified for the JESD204B interface have an additional advantage, since the specification calls for reduced peak-to-peak voltage levels as the sample rate increases and pushes up the output line rate.

The number of pins required for a given converter resolution and sample rate is also considerably lower. The table compares the pin counts for the three different interfaces using a 200-MSPS converter with various channel counts and bit resolutions. The data assume a synchronization clock for each channel's data in the case of the CMOS and LVDS outputs, and a maximum data rate of 4.0 Gb/s for JESD204B data transfer using the CML outputs. The reasons for the progression to JESD204B using CML drivers become obvious when looking at the table and observing the dramatic reduction in pin count that's possible.
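
As a rough illustration of why serialization cuts pin count, here is a sketch using the article's 4.0-Gb/s-per-lane assumption for JESD204B. The CMOS and LVDS pin formulas (parallel CMOS with one clock per channel; DDR LVDS pairs plus a clock pair per channel) are my own simplifying assumptions, not the article's exact table:

```python
import math

def cmos_pins(channels, bits):
    # Parallel CMOS: one pin per bit, plus a data clock pin per channel.
    return channels * (bits + 1)

def lvds_pins(channels, bits):
    # Parallel DDR LVDS: half as many lines, but each line is a pair,
    # plus a clock pair per channel (illustrative assumption).
    return channels * (bits + 2)

def jesd204b_pins(channels, bits, sample_msps, max_lane_gbps=4.0):
    # Serialized payload with 10/8 coding overhead, split across CML lanes.
    line_rate_gbps = channels * bits * sample_msps * 1e-3 * 10 / 8
    lanes = math.ceil(line_rate_gbps / max_lane_gbps)
    return lanes * 2  # one differential pair per lane

# A quad 14-bit, 200-MSPS ADC (an illustrative configuration):
print(cmos_pins(4, 14), lvds_pins(4, 14), jesd204b_pins(4, 14, 200))
```

Under these assumptions the serialized interface needs 8 data pins where the parallel interfaces need 60 or more, which is the reduction the article describes.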

Analog Devices, a market leader in data converters, has seen the trend that's pushing the converter digital interface toward the JESD204 interface defined by JEDEC. The company has been involved with the standard from the beginning, when the first JESD204 specification was released. To date, Analog Devices has released several converters to production with JESD204- and JESD204A-compatible outputs and is currently developing products with outputs compatible with JESD204B.

For example, the AD9639 is a quad-channel, 12-bit, 170/210-MSPS ADC that has a JESD204 interface. The AD9644 and AD9641 are 14-bit, 80/155-MSPS dual and single ADCs that have the JESD204A interface. From the DAC side, the recently released AD9128 is a dual 16-bit, 1.25-GSPS DAC with a JESD204A interface. For more information on Analog Devices JESD204 efforts, visit analog.com/jesd204.

Summary

The increasing speed and resolution of converters has escalated the demand for a more efficient digital interface. The industry began realizing this with the JESD204 serialized data interface. The interface specification has continued to evolve to offer a better and faster way to transmit data between converters and FPGAs (or ASICs). The interface has undergone two revisions to improve upon its implementation and meet the increasing demands brought on by higher speeds and higher-resolution converters.

Looking to the future of converter digital interfaces, it's clear that JESD204 is poised to become the industry choice for the digital interface to converters. Each revision has answered the demand for improvements in its implementation and has allowed the standard to evolve to meet new requirements brought on by changes in converter technology. As system designs become more complex and converter performance pushes higher, the JESD204 standard should be able to adapt and evolve to continue to meet new design requirements.

Jonathan Harris is a product applications engineer in the High Speed Converter Group at Analog Devices.


References

JEDEC Standard JESD204 (April 2006). JEDEC Solid State Technology Association.

JEDEC Standard JESD204A (April 2008). JEDEC Solid State Technology Association.

JEDEC Standard JESD204B (July 2011). JEDEC Solid State Technology Association.

