SiFive and CEVA Partner to Bring Machine Learning Processors to Mainstream Markets – Design and Reuse

Joint silicon development through SiFive's DesignShare Program combines IP and design strengths of both companies to develop Edge AI SoCs for a range of high-volume end markets including smart home, automotive, robotics, security, augmented reality, industrial and IoT

SAN MATEO and MOUNTAIN VIEW, Calif., Jan. 7, 2020 -- SiFive, Inc., the leading provider of commercial RISC-V processor IP and silicon solutions, and CEVA, Inc. (NASDAQ: CEVA), the leading licensor of wireless connectivity and smart sensing technologies, today announced a new partnership to enable the design and creation of ultra-low-power domain-specific Edge AI processors for a range of high-volume end markets. The partnership, part of SiFive's DesignShare program, is centered around RISC-V CPUs and CEVA's DSP cores, AI processors and software, which will be designed into SoCs targeting an array of end markets where on-device neural network inferencing for imaging, computer vision, speech recognition and sensor fusion applications is required. Initial end markets include smart home, automotive, robotics, security and surveillance, augmented reality, industrial and IoT.

Machine Learning Processing at the Edge

Domain-specific SoCs that can handle machine learning processing on-device are set to become mainstream, as the processing workloads of devices increasingly include a mix of traditional software and efficient deep neural networks to maximize performance and battery life and to add new intelligent features. Cloud-based AI inference is not suitable for many of these devices due to security, privacy and latency concerns. SiFive and CEVA are directly addressing these challenges through the development of a range of domain-specific, scalable edge AI processor designs with the optimal balance of processing, power efficiency and cost.

The Edge AI SoCs are supported by CEVA's award-winning CDNN Deep Neural Network machine learning software compiler that creates fully-optimized runtime software for the CEVA-XM vision processors, CEVA-BX audio DSPs and NeuPro AI processors. Targeted for mass-market embedded devices, CDNN incorporates a broad range of network optimizations, advanced quantization algorithms, data flow management and fully-optimized compute CNN and RNN libraries into a holistic solution that enables cloud-trained AI models to be deployed on edge devices for inference processing. CEVA will also supply a full development platform for partners and developers based on the CEVA-XM and NeuPro architectures to enable the development of deep learning applications using the CDNN, targeting any advanced network, as well as DSP tools and libraries for audio and voice pre- and post-processing workloads.

SiFive DesignShare Program

The SiFive DesignShare IP program offers a streamlined process for companies seeking to partner with leading vendors to provide pre-integrated premium Silicon IP for bringing new SoCs to market. As part of SiFive's business model to license IP when ready for mass production, the flexibility and choice of the DesignShare IP program reduces the complexities of contract negotiation and licensing agreements to enable faster time to market through simpler prototyping, no legal red tape, and no upfront payment.

"CEVA's partnership with SiFive enables the creation of Edge AI SoCs that can be quickly and expertly tailored to their workloads, while also retaining the flexibility to support new innovations in machine learning," said Issachar Ohana, Executive Vice President, Worldwide Sales at CEVA. "Our market-leading DSPs and AI processors, coupled with the CDNN machine learning software compiler, allow these AI SoCs to simplify the deployment of cloud-trained AI models in intelligent devices and provide a compelling offering for anyone looking to leverage the power of AI at the edge."

"Enabling future-proof, technology-leading processor designs is a key step in SiFive's mission to unlock technology roadmaps," said Dr. Naveed Sherwani, president and CEO, SiFive. "The rapid evolution of AI models combined with the requirements for low power, low latency, and high-performance demand a flexible and scalable approach to IP and SoC design that our joint CEVA / SiFive portfolio is superbly positioned to provide. The result is shorter time-to-market, while lowering the entry barriers for device manufacturers to create powerful, differentiated products."

Availability

SiFive's DesignShare program, including CEVA-BX Audio DSPs, CEVA-XM Vision DSPs and NeuPro AI processors, is available now. Visit http://www.sifive.com/designshare for more information.

About SiFive

SiFive is on a mission to free semiconductor roadmaps and declare silicon independence from the constraints of legacy ISAs and fragmented solutions. As the leading provider of market-ready processor core IP and silicon solutions based on the free and open RISC-V instruction set architecture, SiFive helps SoC designers reduce time-to-market and realize cost savings with customized, open-architecture processor cores, and democratizes access to optimized silicon by enabling system designers in all markets to build customized RISC-V based semiconductors. Founded by the inventors of RISC-V, SiFive has 16 design centers worldwide, and has backing from Sutter Hill Ventures, Qualcomm Ventures, Spark Capital, Osage University Partners, Chengwei, Huami, SK Hynix, Intel Capital, and Western Digital. For more information, please visit http://www.sifive.com.

About CEVA, Inc.

CEVA is the leading licensor of wireless connectivity and smart sensing technologies. We offer Digital Signal Processors, AI processors, wireless platforms and complementary software for sensor fusion, image enhancement, computer vision, voice input and artificial intelligence, all of which are key enabling technologies for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, robotics, industrial and IoT. Our ultra-low-power IPs include comprehensive DSP-based platforms for 5G baseband processing in mobile and infrastructure, advanced imaging and computer vision for any camera-enabled device and audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For sensor fusion, our Hillcrest Labs sensor processing technologies provide a broad range of sensor fusion software and IMU solutions for AR/VR, robotics, remote controls, and IoT. For artificial intelligence, we offer a family of AI processors capable of handling the complete gamut of neural network workloads, on-device. For wireless IoT, we offer the industry's most widely adopted IPs for Bluetooth (low energy and dual mode), Wi-Fi 4/5/6 (802.11n/ac/ax) and NB-IoT. Visit us at http://www.ceva-dsp.com

A digital and transformed future | Artificial intelligence supercharging other technology – Lexology

Transformative technology can be powerful not just in its own right, but where different technologies converge. Artificial intelligence, in particular, can be a technology supercharger. The second Insight in our series looking at the digital future (and adapted from an article written for the 2019 Bristol Technology Showcase) considers the transformative power of machine learning.

Artificial intelligence, in the form of machine learning or deep learning, relies on finding and mapping the patterns in data and then using more and more data to refine and deepen the accuracy of that model, without the need for human-generated linear hand-coding.

Part of the reason why this has become such a powerful tool is the speed and availability of almost limitless computing power, thanks to Moore's law and the development of the cloud, respectively. By way of illustration of the current scale, availability and low cost of processing power, a group of computer scientists recently challenged themselves to break the World War II Enigma code using 21st-century artificial intelligence. The point of interest is not that they succeeded, but that it took a mere 19 minutes to do so. It might otherwise have taken two weeks, but they hired 1,000 servers for an hour at a cost of $7.

AI-driven generative design

A further example of the transformative power of AI is generative design. The design of pieces of kit, such as a bracket for interconnecting different parts or a structural panel in a vehicle, is being optimised using AI. Parameters concerning the structural properties of the piece can be set by the design engineers (for example, the required strength, tolerances, points of connection, areas of open space). The system will then devise numerous potential designs for the piece. To the human eye, generative design pieces often look almost other-worldly because they are so different to what a human mind might design.

The generative design tool can be configured to optimise different design characteristics. A particularly impactful application is to optimise for low weight. This is particularly significant for electric vehicle and aviation design: lower weight reduces the engine power necessary to move the vehicle or aircraft, making it more efficient.
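As a rough illustration of the idea (not any vendor's actual tool), a generative design loop can be reduced to sampling candidate geometries and keeping the lightest one that still satisfies the engineer-supplied constraints. The strength and weight functions below are made-up surrogates standing in for a real structural solver:

```python
import random

def strength(thickness_mm, rib_count):
    """Made-up surrogate for a structural solver: stiffer with thickness and ribs."""
    return 40.0 * thickness_mm + 15.0 * rib_count

def weight(thickness_mm, rib_count):
    """Made-up weight model: heavier with thickness and ribs."""
    return 2.0 * thickness_mm + 0.8 * rib_count

def generate_designs(required_strength, n_candidates=10_000, seed=42):
    """Sample candidate geometries at random and keep the lightest one
    that meets the engineer-supplied strength constraint."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_candidates):
        t = rng.uniform(1.0, 10.0)   # wall thickness in mm
        r = rng.randint(0, 8)        # number of stiffening ribs
        if strength(t, r) >= required_strength:
            if best is None or weight(t, r) < weight(*best):
                best = (t, r)
    return best

design = generate_designs(required_strength=200.0)
print(design)
```

Real systems explore far richer geometry than two scalar parameters, which is why their output looks so alien, but the optimise-under-constraints loop is the same.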

plus additive manufacturing

Generative design is used in conjunction with additive manufacturing (a form of industrial-scale 3D printing), which makes it possible to produce these extraordinary new designs. The machines do not need physical retooling to switch to a new design, just a new digital file to drive the output. Small production runs are therefore viable, although additive manufacturing is also being used at scale. Moreover, there are material benefits from additive manufacturing's ability to produce complex shapes in a single piece. Fewer joints make the piece structurally stronger, more durable and less prone to fracturing, all of which reduces the frequency of repairs.

plus image recognition-based quality assurance

AI can also be used to train systems to recognise faults and errors in the layers of additive manufacturing. Normally, each layer of a 3D-printed product is photographed as it is printed, and the photographs are subsequently reviewed for quality assurance. An AI image recognition tool, by contrast, can be trained to perform the QA checks and review for errors in real time as the printing machine builds up the layers. The printing process can be stopped if a fatal error is detected, reducing waste by not finishing a faulty product.
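A toy sketch of that layer-by-layer QA loop; the classifier here is a hypothetical stand-in for a trained vision model, and the defect labels are invented for illustration:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class LayerResult:
    layer: int
    defect: Optional[str]   # e.g. "porosity", "delamination", or None

FATAL_DEFECTS = {"delamination"}

def classify_layer(image) -> Optional[str]:
    """Hypothetical stand-in for a trained image classifier. A real system
    would run a vision model on the layer photograph; here the 'image' is
    already a label, purely for illustration."""
    return image

def print_with_qa(layer_images) -> Tuple[List[LayerResult], str]:
    """Inspect each layer as it is 'printed'; abort the build on a fatal
    defect instead of finishing (and scrapping) a faulty part."""
    results = []
    for i, img in enumerate(layer_images):
        defect = classify_layer(img)
        results.append(LayerResult(i, defect))
        if defect in FATAL_DEFECTS:
            return results, "aborted"
    return results, "completed"

results, status = print_with_qa([None, None, "porosity", "delamination", None])
print(status, len(results))
```

The build stops at the first fatal defect, so the three clean or merely cosmetic layers printed before it are the only wasted material.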

Letting robots find their own way

AI has also been used to boost physical robotics. Images of humanoid robots doing backflips or of headless quadruped robots opening doors are immensely impressive. But these systems are hand-coded, line by line, and take a great deal of time to program, which computing power does not, in itself, make faster. However, the ability of machine learning systems to meet a defined goal from scratch, without linear coding, is now being applied to robotics, enabling machines to develop the coding needed for a particular task without human input, essentially by trial and error.

Machine learning has been used to work out how to use a robotic hand to manipulate a cube so that a particular face of the cube was presented in a required position. A digital model of the hand and cube was created, replicating in virtual form the characteristics and constraints of the physical robot hand and of the cube. The system was given definitions of success and of failure. It then tried the task repeatedly over a period of time until it succeeded in controlling the movements of the fingers and palm sufficiently to manipulate the required face of the cube into the required position.

The coding for the virtual hand was then transferred to control the physical version and the physical hand was able to manipulate the physical cube as required.
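Stripped to its essence, that trial-and-error process is policy search: perturb the control parameters, keep the change if the simulator's success signal improves. A minimal sketch, with an entirely made-up reward function in place of a physics simulator:

```python
import random

TARGET = [0.3, -0.7, 0.5]   # the "success" pose; the learner never reads this directly

def reward(policy):
    """Made-up stand-in for the simulator's success signal: 0 is perfect,
    increasingly negative the further the policy is from the target pose."""
    return -sum((p - t) ** 2 for p, t in zip(policy, TARGET))

def trial_and_error(n_trials=5000, step=0.1, seed=0):
    """Random hill climbing: perturb the policy parameters, keep changes
    that improve the reward. Learning by pure trial and error."""
    rng = random.Random(seed)
    policy = [0.0, 0.0, 0.0]
    best = reward(policy)
    for _ in range(n_trials):
        candidate = [p + rng.uniform(-step, step) for p in policy]
        r = reward(candidate)
        if r > best:
            policy, best = candidate, r
    return policy, best

policy, score = trial_and_error()
print(score)
```

The learner only ever sees the reward number, never the target itself, which is exactly why no human needs to hand-code the finger movements.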

AI has been called "software 2.0" for its ability to write itself in this way. Of course, considerable technical skill is needed for these types of project; machine learning typically has a PhD as its entry level. But increasingly, ready-made AI tools are available "as a service", including as one of the options in the portfolios of mix-and-match resources and software offered by many of the major cloud computing vendors.

A significant part of the transformative power of AI is this ability to supercharge other technologies. The development of AI tools that can be used off-the-shelf, without the need for highly specialised skills, will only amplify this effect.

A real introduction to artificial intelligence – Harvard School of Engineering and Applied Sciences

If it hadn't been for a summer internship, Elizabeth Bondi probably wouldn't be where she is now: working toward her Ph.D. in computer science at the Harvard John A. Paulson School of Engineering and Applied Sciences.

After her junior year in high school, Bondi spent a month designing and conducting eye-tracking experiments alongside a local college professor.

"It really got me hooked into this whole idea of research, and pursuing math and science in general," she said. "I loved the idea of trying to answer some question and solve some problem and be creative about different ways of proposing those solutions. I was especially interested in doing research that would benefit society."

Now, she's working to pay it forward by launching a program designed to introduce high school students to artificial intelligence and academic research. Bondi has organized Try AI, an outreach event that will be held in conjunction with the Association for the Advancement of Artificial Intelligence conference in New York City on February 8.

The half-day event, with a focus on AI for good, pairs local high school students with leading AI researchers. Mentors and mentees will be matched based on their areas of interest and spend a few hours working together to brainstorm potential solutions for a challenge faced by society. Groups will then present their proposed solutions.

The event will conclude with a panel of professors and graduate students who discuss topics like applying for college, career choices, and finding research opportunities as an undergraduate.

"I think it is really important for students to get an idea of what people do if they go down this path," Bondi said. "It is great to encourage people to consider STEM, but I think if you don't see the career path at the end of the school tunnel, it is hard to stay motivated or even make that career choice in the first place. Hopefully, by bringing everyone together, we can create communities that will support students as they go through this."

Support from mentors has played a vital role in Bondi's academic journey.

The connections she made during that summer program inspired her to major in imaging science and return to the department where she had interned. She continued to conduct research as an undergraduate, focusing on historical document imaging to preserve valuable records for the future. Bondi also studied remote sensing and its applications in disaster recovery.

Her college mentors helped her pick graduate schools, and she wouldn't have landed at SEAS without their help.

Now in the lab of Milind Tambe, Gordon McKay Professor of Computer Science, Bondi focuses on applying artificial intelligence to aid wildlife conservation efforts.

Working with a group in South Africa, she is deploying AI to help park rangers detect people and animals that appear in footage recorded by conservation drones. Currently, rangers must watch hundreds of hours of thermal imaging footage to identify animals and poachers. AI can streamline the process by rapidly locating potential hot spots in the film, which could give rangers more time to intervene and stop poaching before it occurs.

But the computer vision software must overcome many of the same challenges faced by human eyes.

"Seeing anything in those videos is difficult, especially humans and animals, because it is pretty much just a grayscale image and you only have one channel, versus what you would see with your cell phone camera," Bondi said. "And it is looking for heat, so people and animals are brighter, but it turns out a lot of other things can be warm as well, especially when it is warm outside. So you can get a lot of vegetation that looks like a person."
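A naive version of that hot-spot search, a simple brightness threshold on a single-channel frame rather than the learned detector Bondi's team actually uses, also shows why warm vegetation triggers false positives:

```python
def find_hot_spots(frame, threshold=200):
    """Flag pixels above a brightness threshold in a single-channel
    thermal frame (values 0-255). Anything bright enough passes the
    same test as a person, which is why sun-heated vegetation is a
    classic false positive."""
    spots = []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value >= threshold:
                spots.append((x, y))
    return spots

frame = [
    [ 40,  50,  60,  55],
    [ 45, 230,  58,  52],   # 230: a person... or a warm bush
    [ 50,  60, 210,  57],   # 210: sun-heated vegetation
    [ 48,  52,  55,  50],
]
print(find_hot_spots(frame))
```

Both warm pixels are flagged; distinguishing the person from the bush is precisely the part that needs a trained model rather than a threshold.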

While the skills, knowledge, and intuition park rangers possess are essential for conservation efforts, AI can help rangers apply their limited resources most effectively, Bondi said.

She is also working to build a game theory model of poaching activities to help determine likely poaching hot spots, enabling officials to deploy conservation drones and rangers in areas that give them the best chance to prevent poaching.

The work is especially exciting for Bondi because it is uncharted territory for AI.

"We are trying to make it easier for the people who are using these tools already," she said. "AI can help these park rangers by simply sending an alert that says, 'maybe check this area out.' Even if it is not 100 percent correct, it could help ease their burden."

For Bondi, the opportunity to make an impact in the world of wildlife conservation is gratifying. It hearkens back to the reasons she decided to pursue research in the first place, and builds off the work she's done developing Try AI.

And she still relies on mentors, during her daily research activities and when she considers her future career path.

"Just the opportunity to continue to do this kind of mentorship is something that I am very passionate about, and it draws me towards academia," she said. "Thinking about having students that I can help guide toward their next phase of life is very exciting."

Artificial Intelligence in Agriculture Market Size Worth $2.9 Billion by 2025 | CAGR: 25.4%: Grand View Research, Inc. – PRNewswire

SAN FRANCISCO, Jan. 8, 2020 /PRNewswire/ -- The global artificial intelligence in agriculture market size is expected to reach USD 2.9 billion by 2025, according to a new report by Grand View Research, Inc. The market is anticipated to register a CAGR of 25.4% from 2019 to 2025. Artificial intelligence solutions in the agricultural industry are emerging in various forms, such as soil and crop monitoring, agricultural robots, and predictive analytics. Farmers and agribusiness corporations are increasingly using soil sampling and artificial intelligence-enabled sensors to gather data for better analysis and processing. The availability of this processed data has paved the way for the deployment of artificial intelligence in agriculture and farming.
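The report's headline numbers can be sanity-checked with simple compound-growth arithmetic; note the implied 2019 base value below is our back-calculation, not a figure stated in the report:

```python
def project(base_value, cagr, years):
    """Compound a base value forward at a constant annual growth rate."""
    return base_value * (1 + cagr) ** years

# Working backward from the report: a market hitting ~$2.9B in 2025 after
# six years of 25.4% CAGR implies a 2019 base of roughly $0.75B.
base_2019 = 2.9 / (1 + 0.254) ** 6
print(round(base_2019, 2))                      # 0.75
print(round(project(base_2019, 0.254, 6), 2))   # 2.9
```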

Key suggestions from the report:

Read the 100-page research report with ToC on "Artificial Intelligence in Agriculture Market Size, Share & Trends Analysis Report By Component (Software, Hardware), By Technology, By Application (Precision Farming, Drone Analytics), By Region, And Segment Forecasts, 2019 - 2025" at: https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-in-agriculture-market

A rapidly increasing global population is one of the key factors driving the need for artificial intelligence in agriculture. The global population is expected to reach 9.8 billion by 2050, according to the UN, and food production must increase significantly to keep pace. Artificial intelligence enables efficient farming techniques that increase crop productivity and yield. For instance, the AI Sowing App developed by Microsoft sends farmers advisories on the optimal date for crop sowing. It enhances farmers' efficiency in terms of planting and forecasting weather conditions.

The Asia Pacific market is expected to witness substantial growth over the forecast period, owing to increasing adoption of artificial intelligence-enabled solutions and services by agriculture-technology companies in emerging economies. Emerging economies such as India and China have started implementing artificial intelligence technologies such as machine learning and computer vision to increase crop yield. Favorable regulations and standards in these countries encourage the implementation of modern techniques in farming and agriculture. For instance, in July 2019, the government of India began using artificial intelligence for yield estimation and crop cutting to cut down the cost of farming and increase productivity.

Grand View Research has segmented the global artificial intelligence in agriculture market based on component, technology, application, and region:

Find more research reports on Next Generation Technologies Industry, by Grand View Research:

Gain access to Grand View Compass, our BI-enabled intuitive market research database of 10,000+ reports

About Grand View Research

Grand View Research, a U.S.-based market research and consulting company, provides syndicated as well as customized research reports and consulting services. Registered in California and headquartered in San Francisco, the company comprises over 425 analysts and consultants, adding more than 1200 market research reports to its vast database each year. These reports offer in-depth analysis on 46 industries across 25 major countries worldwide. With the help of an interactive market intelligence platform, Grand View Research helps Fortune 500 companies and renowned academic institutes understand the global and regional business environment and gauge the opportunities that lie ahead.

Contact:

Sherry James, Corporate Sales Specialist, USA, Grand View Research, Inc. Phone: +1-415-349-0058 Toll Free: 1-888-202-9519 Email: sales@grandviewresearch.com Web: https://www.grandviewresearch.com Follow Us: LinkedIn | Twitter

SOURCE Grand View Research, Inc.

Delta Develops Artificial Intelligence Tool to Address Weather Disruption, Improve Flight Operations – Aviation Today

Delta Air Lines plans an initial launch of a new artificial intelligence machine learning tool for spring 2020. Photo: Delta Air Lines

Delta Air Lines CEO Ed Bastian used his keynote speech at the annual Consumer Electronics Show to discuss a new 2020s operational structure for the international carrier that will be driven by the use of a new artificial intelligence (AI) machine learning tool.

The tool is under development at Delta's operations and customer center. Bastian did not provide a specific product name for the technology, instead calling it a proprietary tool that will mainly be focused on helping passengers and flight crews overcome weather occurrences that impact the routes they fly on a daily basis. The emphasis on AI is a familiar strategy across all of Delta's divisions, including its maintenance team, whose predictive maintenance leadership gave a speech on how the airline is shifting toward the adoption of AI at the 2019 AEEC/AMC annual conference.

"We've cancelled cancellations, but we still have to deal with weather variables like hurricanes or a nasty Nor'easter, and that's why the team in our operations and customer center is developing the industry's first machine learning platform to help ensure a smooth operation even in extreme conditions. The system uses operational data to run scenarios and project future outcomes while simulating all the variables of running a global airline with more than 1,000 planes in the sky," Bastian said.

Initial launch of Delta's use of the new tool is scheduled for the spring, with the airline describing it as capable of creating hypothetical outcomes for decision-making in anticipation of large-scale disruptions caused by weather or other environmental factors beyond its control. A key aspect of the tool is its ability to use machine learning to learn from the impact of weather disruption, so that airline personnel can make better decisions when the same situation occurs in the future.
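Bastian's description, running scenarios on operational data to project outcomes, is at its core Monte Carlo simulation. A bare-bones sketch with entirely hypothetical numbers (Delta's actual model is proprietary and far richer):

```python
import random

def simulate_day(n_flights, cancel_prob, rng):
    """One simulated operating day: each flight is independently cancelled
    with probability cancel_prob (a gross simplification of real operations)."""
    return sum(1 for _ in range(n_flights) if rng.random() < cancel_prob)

def project_outcomes(n_flights=1000, cancel_prob=0.02, n_scenarios=10_000, seed=1):
    """Monte Carlo over many simulated days to estimate the expected and
    worst-case cancellations under a given weather severity."""
    rng = random.Random(seed)
    results = [simulate_day(n_flights, cancel_prob, rng) for _ in range(n_scenarios)]
    return sum(results) / len(results), max(results)

mean_cancels, worst_case = project_outcomes()
print(mean_cancels, worst_case)
```

Re-running the projection with a higher cancel_prob is the "hypothetical outcome" exercise: planners see the distribution of a storm day before it happens.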

Neither Bastian nor the airline's media team provided specific details about what types of algorithms the new tool will employ or what types of onboard aircraft systems will help fuel them; however, Bastian's keynote and Delta's development activity in recent years help explain how the airline is using artificial intelligence. The chief executive's reference to "cancelling cancellations" can be traced back to Delta's improvement in avoiding maintenance cancellations, with the airline noting in an October 2019 press release that it had more than 5,600 cancellations in 2010 versus just 55 in 2018, through an internal shift to predictive maintenance.

In 2018, a multi-year agreement was signed between Delta and Airbus for the use of the Skywise Core Platform and Predictive Maintenance Application. Skywise is the data analytics platform Airbus uses to improve flight operational efficiency for airlines, enabled by the Collins Aerospace flight operations and maintenance exchanger (FOMAX), a secure server router and compact connectivity unit that gathers aircraft maintenance and performance data and automatically sends it to an airline's maintenance control center engineers and technicians.

When an aircraft lands, FOMAX uses 4G antennas to transmit all of the performance data about every system on the aircraft to the Skywise analytics platform where it is analyzed and used to develop modeling to predict upcoming system failures. That process can help Delta understand when to replace certain parts before they fail and cause an aircraft on ground situation. In 2019, through November, Delta reported a completion factor of 99.8 percent, according to U.S. Department of Transportation data, including the lowest rate of aircraft maintenance-related cancels in its history.

Delta's pilots have also developed a flight weather viewer tablet application that gives them a three-dimensional view of their flight path with a prediction of where turbulence will occur. Turbulence avoidance first emerged at Delta in 2016, when its fiscal year annual report noted that pilots were beta testing the use of algorithms developed by the National Center for Atmospheric Research.

Bastian said the airline is still improving its use of turbulence avoidance and that it will be a major focus of its overall AI- and machine learning-driven operational structure moving forward.

"Another focus we have is turbulence; we're seeing more and more instances of it, and it has a very real impact on our customers and on our employees. We have been able to reduce the impact of turbulence with a flight weather viewer, which is an app developed by our very own Delta pilots. It visualizes turbulence and other weather hazards along the flight path. Using it, pilots can adjust their course more precisely," Bastian said. "It also helps our pilots give real-time updates to travelers while they're in the air in advance of encountering turbulence and can also let them know how long we expect it to last."

A key aspect of the new AI tool is that it is proprietary, using Delta's internally held historical operational data to simulate the outcomes of certain environmental impacts on flight operations in real time. Erik Snell, senior vice president of Delta's operations and customer center, said in a statement that the carrier is adding "a machine learning platform to our array of behind-the-scenes tools so that the more than 80,000 people of Delta can even more quickly and effectively solve problems even in the most challenging situations."

The variables considered by the new tool's machine learning algorithm range from aircraft placement and crew restrictions to geopolitical constraints. Delta's smartphone app will also play a major role in its adoption of AI, according to Snell.

"As the Fly Delta app transforms into a day-of-travel digital concierge, we expect our quicker game-time decisions to play an even greater role in providing a more stress-free travel experience for our customers," Snell said.

Artificial Intelligence Offers Companies a New Way to Fake Diversity – Jezebel

Most companies will do anything to promote diversity short of implementing the systemic changes required to become diverse. And for brands looking to appear diverse without doing any of the pesky work of becoming diverse, AI-generated images offer all the appearances of including POC and women without the headache of including living humans in businesses.

According to the Washington Post, machine-generated compilations of human faces are coming to a brochure near you thanks to newer, cheaper AI technology that uses thousands of photos of human faces in order to create convincing mock-ups. These images are then available for sale to anyone who needs a human-esque shape to create advertising content, diverse-looking brochures, or a fake Facebook profile to convince your Aunt Mary that Russia is paying Elizabeth Warren to hide Hillary's servers in the basement of a pizza restaurant in Washington, D.C.

One Argentinian AI startup called Icons8 sells a subscription package for fake images that employs filters that offer photos ranging from infant to elderly, and offers ethnicity options such as White, Latino, Asian and Black as well as emotions from joy to despair.

The attempts to fake instead of make diversity have already begun. In June 2019, GQ ran a photo of a bunch of tech dudes in an Italian villa to which a woman (who is an actual living CEO) had dutifully been added for the sake of appearances.

Beyond the questionable ethics of using fake images to sell products to actual people (one AI image startup boasts a dating site as a client), there is the question of where these images come from. The technology can't just conjure up a human face from nowhere. Instead, many of these companies rely on models who weren't told ahead of time what their photos would be used for and aren't paid extra for the fact that bits and pieces of their faces are being used thousands of times for purposes they never consented to.

Perhaps the only good thing about the coming days in which humans will no longer be able to trust their eyes is that we are due for some terrifying new monsters:

But the systems are imperfect artists, untrained in the basics of human anatomy, and can only attempt to match the patterns of all the faces they've processed before. Along the way, the AI creates an army of what [Ivan Braun, co-founder of Icons8] calls "monsters": nightmarish faces pocked with inhuman deformities and surreal mutations. Common examples include overly fingered hands, featureless faces and people with mouths for eyes.

Currently, the law has not caught up to the technology, and fakes are not required to carry any watermarks to distinguish them from images of real people. Transparency is left up to companies' discretion. Here is my proposal: for every passable human image, companies should be required to have one monster. Then, that fake diversity pamphlet becomes fun for everyone. And imagine the exciting possibilities for Tinder matches. Under my system, at least the scary future has an accurate face.

Robots, artificial intelligence highlight CES expo – Times of India

08 Jan 2020, 01:32PM ISTSource: AP

It is that time of year again when cute little robots become global superstars. At the CES Unveiled event, these little robots, called Lovots, are charming the crowd and the hundreds of gathered journalists. They are here for the first time as a real product that is about to hit the market in Japan; previous versions that have been shown were prototypes. The Lovot has a camera with facial recognition software, so when you walk into a room it knows who you are. The updated Lovot can also learn how your home is set up, for example where the front door is, so it can rush to you when you come home. Robots that act as friends, or as pets, instead of just performing repetitive tasks are becoming increasingly mainstream. Several other robots were shown at the Unveiled event, like the Mobile Arm Robot system from the Industrial Technology Research Institute, which can see the objects it needs to move. Most robots now also come with some sort of artificial intelligence function, like the ability to answer questions and hold a basic conversation. But AI can also be found in other devices, from voice assistants to smart city solutions like traffic management.

Read the original here:
Robots, artificial intelligence highlight CES expo - Times of India

AI Stocks: The Real Winner of the Artificial Intelligence Race – Investorplace.com

Many companies are vying to dominate the artificial intelligence (AI) space. The market is worth billions, and it's only going to keep getting bigger. According to Grand View Research, by 2025 the global AI market is estimated to hit a stunning $390.9 billion! So, it's no surprise that tech companies want a piece of the pie, and as big a piece as they can get.

Not surprisingly, two leaders of the AI sector are ones you're probably already well-acquainted with: Google (NASDAQ:GOOGL) and Microsoft (NASDAQ:MSFT). But neither are the winners. No, that title now goes to Baidu (NASDAQ:BIDU), known as "the Google of China," which is beating its competitors in a big way.

On December 11, Baidu's AI machine, Ernie, received top marks and broke records during the General Language Understanding Evaluation (GLUE) test. Put simply, this test determines how well an AI machine can understand human language. Ernie scored a 90.1 out of 100, the first AI system to score above 90, and beat out Microsoft's score of 89.9 and Google's score of 89.7.

Since the results were released, the stock has made a nice move higher, running up as much as 21%. Given the positive news and positive trading action, does this make Baidu a good AI investment?

My Portfolio Grader says no. In fact, it gives BIDU a solid F for its Total Grade, making this stock a Strong Sell.

It receives poor marks for its Sales Growth, Operating Margins Growth, Earnings Growth and Earnings Momentum. And even worse, it receives an F-rating for its Quantitative Grade. So, even though the stock has moved higher over the past month, there's been no significant increase in buying pressure. This tells us that the smart money is still staying far, far away.

So how do you play this growing AI trend? Well, it's not with Microsoft or Google, either. Yes, they rate higher in Portfolio Grader (Microsoft receives an A-rating and Google holds a C-rating), but the real money isn't going to be made there. It's going to be made with the company that provides the AI technology for all of them.

I call this the AI Master Key.

It is the company that makes the brain that all AI software needs to function, spot patterns, and interpret data.

It's known as the Volta Chip, and it's what makes the AI revolution possible.

Some of the biggest players in elite investing circles have large stakes in the AI Master Key:

Ron Baron, billionaire money manager with one of the biggest estates in the Hamptons.

Ken Fisher, author of The Ten Roads to Riches and other bestsellers, who's made the Forbes 400 Richest Americans list.

Mario Gabelli, namesake of the Gabelli Funds, with a salary of $85 million for one year, Wall Street's highest-paid CEO.

And some of the biggest companies are also its customers, including Google, Microsoft, Amazon (NASDAQ:AMZN), Baidu, Facebook (NASDAQ:FB), Tesla (NASDAQ:TSLA) and Alibaba (NYSE:BABA).

So it doesn't really matter which competitor wins the AI race, because this company's technology is used by all of them; therefore, its investors will profit off of all the AI success.

I'll tell you everything you need to know, as well as my buy recommendation, in my special report for Growth Investor, The AI Master Key. The stock is currently sitting pretty with a 40% return on my Growth Investor Buy List, but it is still under my buy limit price, so you'll want to sign up now; that way, you can get in while you can still do so cheaply.

Click here for a free briefing on this groundbreaking innovation.

Louis Navellier had an unconventional start, as a grad student who accidentally built a market-beating stock system with returns rivaling even Warren Buffett. In his latest feat, Louis discovered the Master Key to profiting from the biggest tech revolution of this (or any) generation. Louis Navellier may hold some of the aforementioned securities in one or more of his newsletters.

Continued here:
AI Stocks: The Real Winner of the Artificial Intelligence Race - Investorplace.com

LeaVoice an Artificial Intelligence made to reduce your stress and anxiety will showcase at CES2020 – PRUnderground

HoloAsh is developing an AI friend to reduce your stress. Yoshua Kishi, an entrepreneur diagnosed with ADHD, founded the startup to create a safe space for people who don't fit in a society organized for the average.

Recently, HoloAsh got accepted into 500 Startups in Kobe and will be launching its new product, LeaVoice, at the upcoming CES2020 event. The showcase will be at the CES2020 Las Vegas Convention Center between Jan 7 and Jan 10. For more information, please visit: https://bit.ly/leavoicecom

About LeaVoice: LeaVoice is a voice chat with an AI friend for stressed and anxious people. If you're in a bad mood and need someone to talk to, your AI friend is always here. LeaVoice is judgment-free, and available 24/7. "Sometimes, you feel like no one cares about you, that you don't fit in. Many social services provide basic communication but won't relieve you of these feelings. Chatbots are one-dimensional. But LeaVoice is listening to you 24/7, analyzing your voice to detect emotions that a text chat cannot convey," said Yoshua Kishi, CEO of HoloAsh.

About HoloAsh: HoloAsh was founded in 2018, creating an AI friend for people with stress and anxiety. It was founded by a serial entrepreneur together with a former Amazon data scientist, an NLG specialist, and a former Stanford researcher. Unlike current chatbots, our AI friend will detect emotion from the sound of someone's voice to provide the best possible response. We've been accepted into 500 Startups in Kobe and Plug and Play Kyoto as well.

Additional Information:
Website: https://holoash.com/
Press Kit download: https://brand.sparkamplify.com/holoash
Facebook: https://www.facebook.com/holoash/
Twitter: https://twitter.com/LeaVoice_AI

Media contact: Yoshua Kishi, Marysia Romaszkan
Email: yoshua@holoash.com, marysia@holoash.com

Disclaimer: This product is not intended to diagnose, treat, or prevent any disease. The information on this website or in emails is designed for educational purposes only. It is not intended to be a substitute for informed medical advice or care. Please consult a doctor with any questions or concerns you may have regarding your medical health. The news site hosting this press release is not associated with LeaVoice or HoloAsh. It is merely publishing a press release announcement submitted by a company, without any stated or implied endorsement of the person, product or service.

About SparkAmplify Distribution

SparkAmplify was founded in 2016. It is a SaaS company based in California and Taipei, specializing in media outreach and influencer engagement. The team consists of a group of passionate data scientists, engineers, designers and marketers looking to reshape digital marketing via machine learning and influencer social network analysis. SparkAmplify was selected as one of the Top 50 startups among 6,000+ startups from 80 countries at the 2017 Startup Grind Global Conference and a Top 100 startup at the Echelon 2019 Asia Summit.

Read more:
LeaVoice an Artificial Intelligence made to reduce your stress and anxiety will showcase at CES2020 - PRUnderground

Facebook to ban deepfake videos created with artificial intelligence technology – TVNZ

Facebook says it is banning deepfake videos, the false but realistic clips created with artificial intelligence and sophisticated tools, as it steps up efforts to fight online manipulation. But the policy leaves plenty of loopholes.

The social network said yesterday that it's beefing up its policies for removing videos edited or synthesised in ways that aren't apparent to the average person, and which could dupe someone into thinking the video's subject said something he or she didn't actually say.

Created by artificial intelligence or machine learning, deepfakes combine or replace content to create images that can be almost impossible to tell are not authentic.

"While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases," Facebook's vice president of global policy management, Monika Bickert, said in a blog post.

However, she said the new rules won't include parody or satire, or clips edited just to change the order of words. The exceptions underscore the balancing act Facebook and other social media services face in their struggle to stop the spread of online misinformation and "fake news," while also respecting free speech and fending off allegations of censorship.

The US tech company has been grappling with how to handle the rise of deepfakes after facing criticism last year for refusing to remove a doctored video of House Speaker Nancy Pelosi slurring her words, which was viewed more than 3 million times. Experts said the crudely edited clip was more of a "cheap fake" than a deepfake.

Then, a pair of artists posted fake footage of Facebook CEO Mark Zuckerberg showing him gloating over his one-man domination of the world. Facebook also left that clip online. The company said at the time that neither video violated its policies.

The problem of altered videos is taking on increasing urgency as experts and lawmakers try to figure out how to prevent deepfakes from being used to interfere with the U.S. presidential election in November.

The technology is called "deepfake" and it's already causing major problems overseas, with experts fearing it'll do the same here. Source: 1 NEWS

The new policy is "a strong starting point" but doesn't address broader problems, said Sam Gregory, program director at Witness, a nonprofit working on using video technology for human rights.

"The reality is there aren't that many political deepfakes at the moment," he said. "They're mainly nonconsensual sexual images."

The bigger problem is videos that are either shown without context or lightly edited, which some have dubbed "shallow fakes," Gregory said.

These include the Pelosi clip, or one of Democratic presidential candidate Joe Biden that made the rounds last week, selectively edited to make it appear he made racist remarks.

Gregory, whose group was among those that gave feedback to Facebook for the policy, said that while the new rules look strong on paper, there are questions around how effective the company will be at uncovering synthetic videos.

Facebook has built deepfake-detecting algorithms and can also look at an account's behavior to get an idea of whether its intention is to spread disinformation. That will give the company an edge over users or journalists in sniffing them out, Gregory said.

But those algorithms haven't been used widely for deepfakes in the wild. "So it is an open question how effective detection will be," he said. "This is an algorithmic kind of game of cat and mouse, where the forgeries will get better alongside the detection."

Facebook said any videos, deepfake or not, will also be removed if they violate existing standards for nudity, graphic violence or hate speech.

Those that aren't removed can still be reviewed by independent third-party fact-checkers and any deemed false will be flagged as such to people trying to share or view them, which Bickert said was a better approach than just taking them down.

"If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem," Bickert said. "By leaving them up and labeling them as false, we're providing people with important information and context."

Twitter, which has been another hotbed for misinformation and altered videos, said it's in the process of creating a policy for synthetic and manipulated media, which would include deepfakes and other doctored videos. The company has asked for public feedback on the issue.

The responses it's considering include putting a notice next to tweets that include manipulated material. The tweets might also be removed if they're misleading and could cause serious harm to someone.

YouTube, meanwhile, has a policy against "deceptive practices" that the company says includes "the deceptive uses of manipulated media" that may pose serious risk of harm. For instance, the company removed the Pelosi video last year. Google, which owns YouTube, is also researching how to better detect deepfakes and other manipulated media.

More:
Facebook to ban deepfake videos created with artificial intelligence technology - TVNZ