
Category Archives: Ai

Global Medical Imaging Informatics Market Accelerated by Cloud and AI to Enable Deployment Options and Support Decision-making – PRNewswire

Posted: March 11, 2021 at 12:10 pm

For further information on this analysis, Digital Transformation in Imaging Powering the Next Wave of Growth in Informatics, please visit: http://frost.ly/5ch

"Medical imaging informatics is poised to play a central role in the intervention and management of illnesses. Digitization in imaging offers several advantages, including higher pixel information, efficient storage and retrieval, and ease in sharing images between the care team members," said Suresh Kuppuswamy, Healthcare & Life Sciences Industry Principal at Frost & Sullivan. "Radiology IT is forecast to maintain its position as the largest revenue contributor, driven by the adoption of radiology PACS in emerging countries, as most of them are projected to still implement the PACS at the modality or departmental level.

Kuppuswamy added: "From a regional market viewpoint, the North American market will largely drive the enterprise imaging market, underscored by the need for clinical decision support systems and image exchange solutions. Europe, the Middle East, and Africa (EMEA) are expected to witness growth in ancillary and enterprise imaging segments. Similarly, China, Australia, Korea, and Japan are forecast to be the major economies spurring Asia-Pacific's revenue growth. Continuous healthcare infrastructure improvements in Southeast Asia and India also provide additional growth opportunities for vendors."

To tap growth prospects in the medical imaging and informatics market, vendors need to focus on the following:

Digital Transformation in Imaging Powering the Next Wave of Growth in Informatics is the latest addition to Frost & Sullivan's Healthcare & Life Sciences research and analyses available through the Frost & Sullivan Leadership Council, which helps organizations identify a continuous flow of growth opportunities to succeed in an unpredictable future.

About Frost & Sullivan

For six decades, Frost & Sullivan has been world-renowned for its role in helping investors, corporate leaders and governments navigate economic changes and identify disruptive technologies, Mega Trends, new business models and companies to action, resulting in a continuous flow of growth opportunities to drive future success. Contact us: Start the discussion.

Digital Transformation in Imaging Powering the Next Wave of Growth in Informatics


Contact: Mariana Fernandez, Corporate Communications, P: +1 210 348 10 12, E: [emailprotected], http://ww2.frost.com

SOURCE Frost & Sullivan

https://ww2.frost.com/


Pachama Recognized as the World’s Most Innovative AI Company of 2021 in Fast Company’s Annual List – GlobeNewswire

Posted: at 12:10 pm

San Francisco, California, March 11, 2021 (GLOBE NEWSWIRE) -- Pachama has been named to Fast Company's prestigious annual list of the World's Most Innovative Companies for 2021.

The list honors the businesses that have not only found a way to be resilient in the past year, but also turned those challenges into impact-making processes. These companies did more than survive; they thrived, making an impact on their industries and culture as a whole.

Pachama has earned the top standing in the Artificial Intelligence category for its use of AI to accurately measure carbon sequestration in reforestation and conservation projects around the world, a process that has enabled Pachama to bring clarity to which initiatives are making a real difference in the mission of solving climate change.

Being recognized as one of the world's most innovative companies, alongside highly respected entrepreneurs and leading innovators, is an honor for Pachama. We are delighted that our technology is gaining global attention as a way to bring new levels of trust to forest carbon markets. In the past, many companies had considered investing in forest restoration and conservation but didn't do it because of concerns about how they were going to ensure that the commitments from these projects were going to be reported. By harnessing AI, LiDAR and satellite imagery, we are contributing to a credible forest carbon market, founded on increased accountability, accuracy and transparency. Our technology will enable a new standard of assurance in carbon markets, and mean that more and more individuals and organizations can invest with confidence to achieve climate goals, all while supporting our global forests.

"In a year of unprecedented challenges, the companies on this list exhibit fearlessness, ingenuity, and creativity in the face of crisis," said Fast Company Deputy Editor David Lidsky, who oversaw the issue with Senior Editor Amy Farley.

Large organizations such as Microsoft and Shopify are among Pachama's customers, utilizing Pachama's verified forest carbon projects to achieve net zero ambitions. This week Pachama also announced its first endeavor into project origination as it was selected as a key strategic partner by Mercado Libre, Latin America's largest ecommerce and fintech company, to kickstart forest restoration projects in Brazil and verify and monitor carbon impact. This partnership marks a significant step for the company as Pachama expands its capabilities to help new reforestation projects get off the ground, powered by advanced technology.

Nature-based solutions are a critical tool in the climate toolkit, with more projects needed to remove climate-relevant amounts of CO2 from the atmosphere. Pachama's technologies are key to scaling these efforts and catalysing project growth.

Fast Company's Most Innovative Companies issue (March/April 2021) is now available online here, as well as in app form via iTunes and on newsstands beginning March 16, 2021. The hashtag is #FCMostInnovative.

Notes to Editors:

Fast Company's editors and writers sought out the most groundbreaking businesses across the globe and industries. They also judged nominations received through their application process.

The World's Most Innovative Companies is Fast Company's signature franchise and one of its most highly anticipated editorial efforts of the year. It provides both a snapshot and a road map for the future of innovation across the most dynamic sectors of the economy.

About Pachama

Pachama is a mission-driven company looking to restore nature to help address climate change. Pachama brings the latest technology in remote sensing and AI to the world of forest carbon in order to enable forest conservation and restoration to scale. Pachama's core technology harnesses satellite imaging with artificial intelligence to measure carbon captured in forests. Through the Pachama marketplace, responsible companies and individuals can connect with carbon credits from projects that are protecting and restoring forests worldwide. Pachama was founded in 2018 by Diego Saez-Gil and Tomas Aftalion, two technology entrepreneurs originally from Argentina, now based in Silicon Valley, California. The company is backed by some of the top venture capital funds focused on climate tech, including Breakthrough Energy Ventures, Amazon Climate Fund, LowerCarbon Capital, Saltwater and Y Combinator.

About Fast Company

Fast Company is the only media brand fully dedicated to the vital intersection of business, innovation, and design, engaging the most influential leaders, companies, and thinkers on the future of business. The editor-in-chief is Stephanie Mehta. Headquartered in New York City, Fast Company is published by Mansueto Ventures LLC, along with our sister publication Inc., and can be found online at http://www.fastcompany.com.


‘Typographic attack’: pen and paper fool AI into thinking apple is an iPod – The Guardian

Posted: at 12:10 pm

As artificial intelligence systems go, it is pretty smart: show Clip a picture of an apple and it can recognise that it is looking at a fruit. It can even tell you which one, and sometimes go as far as differentiating between varieties.

But even the cleverest AI can be fooled with the simplest of hacks. If you write out the word "iPod" on a sticky label and paste it over the apple, Clip does something odd: it decides, with near certainty, that it is looking at a mid-00s piece of consumer electronics. In another test, pasting dollar signs over a picture of a dog caused it to be recognised as a piggy bank.

OpenAI, the machine learning research organisation that created Clip, calls this weakness a "typographic attack". "We believe attacks such as those described above are far from simply an academic concern," the organisation said in a paper published this week. "By exploiting the model's ability to read text robustly, we find that even photographs of handwritten text can often fool the model." This attack works in the wild, but it requires no more technology than pen and paper.
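OpenAI has released Clip's weights along with a small Python package, so the behaviour described above can be probed directly. Below is a minimal sketch of zero-shot classification with the open-source `clip` package; the image file name and the label set are illustrative placeholders rather than anything from the article, and the actual probabilities will depend on the image used.

```python
# Minimal sketch of zero-shot classification with OpenAI's open-source CLIP
# (pip install git+https://github.com/openai/CLIP.git). The image path and
# labels are illustrative placeholders, not taken from the article.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# e.g. a photo of an apple with a handwritten "iPod" label stuck on it
image = preprocess(Image.open("apple_with_ipod_label.jpg")).unsqueeze(0).to(device)
labels = ["an apple", "an iPod", "a piggy bank", "a dog"]
text = clip.tokenize([f"a photo of {label}" for label in labels]).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)          # similarity of image vs. each caption
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()[0]

for label, p in sorted(zip(labels, probs), key=lambda x: -x[1]):
    print(f"{label}: {p:.2%}")
```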

Like GPT-3, the last AI system made by the lab to hit the front pages, Clip is more a proof of concept than a commercial product. But both have made huge advances in what was thought possible in their domains: GPT-3 famously wrote a Guardian comment piece last year, while Clip has shown an ability to recognise the real world better than almost all similar approaches.

While the lab's latest discovery raises the prospect of fooling AI systems with nothing more complex than a T-shirt, OpenAI says the weakness is a reflection of some underlying strengths of its image recognition system. Unlike older AIs, Clip is capable of thinking about objects not just on a visual level, but also in a more conceptual way. That means, for instance, that it can understand that a photo of Spider-man, a stylised drawing of the superhero, or even the word "spider" all refer to the same basic thing, but also that it can sometimes fail to recognise the important differences between those categories.

"We discover that the highest layers of Clip organise images as a loose semantic collection of ideas," OpenAI says, "providing a simple explanation for both the model's versatility and the representation's compactness." In other words, just like how human brains are thought to work, the AI thinks about the world in terms of ideas and concepts, rather than purely visual structures.

But that shorthand can also lead to problems, of which typographic attacks are just the top level. The Spider-man neuron in the neural network can be shown to respond to the collection of ideas relating to Spider-man and spiders, for instance; but other parts of the network group together concepts that may be better separated out.

"We have observed, for example, a 'Middle East' neuron with an association with terrorism," OpenAI writes, "and an 'immigration' neuron that responds to Latin America. We have even found a neuron that fires for both dark-skinned people and gorillas, mirroring earlier photo tagging incidents in other models we consider unacceptable."

As far back as 2015, Google had to apologise for automatically tagging images of black people as gorillas. In 2018, it emerged the search engine had never actually solved the underlying issues with its AI that had led to that error: instead, it had simply manually intervened to prevent it ever tagging anything as a gorilla, no matter how accurate, or not, the tag was.


How Facebook got addicted to spreading misinformation – MIT Technology Review

Posted: at 12:10 pm

Within a year his team had developed these models, as well as the tools for designing and deploying new ones faster. Before, it had taken Quiñonero's engineers six to eight weeks to build, train, and test a new model. Now it took only one.

News of the success spread quickly. The team that worked on determining which posts individual Facebook users would see on their personal news feeds wanted to apply the same techniques. Just as algorithms could be trained to predict who would click what ad, they could also be trained to predict who would like or share what post, and then give those posts more prominence. If the model determined that a person really liked dogs, for instance, friends' posts about dogs would appear higher up on that user's news feed.

Quiñonero's success with the news feed, coupled with impressive new AI research being conducted outside the company, caught the attention of Zuckerberg and Schroepfer. Facebook now had just over 1 billion users, making it more than eight times larger than any other social network, but they wanted to know how to continue that growth. The executives decided to invest heavily in AI, internet connectivity, and virtual reality.

They created two AI teams. One was FAIR, a fundamental research lab that would advance the technology's state-of-the-art capabilities. The other, Applied Machine Learning (AML), would integrate those capabilities into Facebook's products and services. In December 2013, after months of courting and persuasion, the executives recruited Yann LeCun, one of the biggest names in the field, to lead FAIR. Three months later, Quiñonero was promoted again, this time to lead AML. (It was later renamed FAIAR, pronounced "fire.")


In his new role, Quiñonero built a new model-development platform for anyone at Facebook to access. Called FBLearner Flow, it allowed engineers with little AI experience to train and deploy machine-learning models within days. By mid-2016, it was in use by more than a quarter of Facebook's engineering team and had already been used to train over a million models, including models for image recognition, ad targeting, and content moderation.

Zuckerberg's obsession with getting the whole world to use Facebook had found a powerful new weapon. Teams had previously used design tactics, like experimenting with the content and frequency of notifications, to try to hook users more effectively. Their goal, among other things, was to increase a metric called L6/7, the fraction of people who logged in to Facebook six of the previous seven days. L6/7 is just one of myriad ways in which Facebook has measured engagement: the propensity of people to use its platform in any way, whether it's by posting things, commenting on them, liking or sharing them, or just looking at them. Now every user interaction once analyzed by engineers was being analyzed by algorithms. Those algorithms were creating much faster, more personalized feedback loops for tweaking and tailoring each user's news feed to keep nudging up engagement numbers.
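Because the article spells out the L6/7 definition, the fraction of users who logged in on at least six of the previous seven days, the metric itself is easy to reproduce. A toy sketch with made-up login data follows; it illustrates the definition only and is not Facebook's internal tooling.

```python
# Toy computation of the L6/7 engagement metric described above: the fraction
# of users who logged in on at least six of the previous seven days.
# The login log is a made-up example, not real data.
from datetime import date, timedelta

logins = {  # user_id -> set of dates with at least one login
    "u1": {date(2021, 3, 4) + timedelta(days=d) for d in range(7)},              # 7 of 7 days
    "u2": {date(2021, 3, 4) + timedelta(days=d) for d in (0, 1, 2, 4, 5, 6)},    # 6 of 7 days
    "u3": {date(2021, 3, 4), date(2021, 3, 7)},                                  # 2 of 7 days
}

def l6_of_7(logins_by_user, as_of):
    window = {as_of - timedelta(days=d) for d in range(1, 8)}  # the previous 7 days
    active = sum(1 for days in logins_by_user.values() if len(days & window) >= 6)
    return active / len(logins_by_user)

print(l6_of_7(logins, date(2021, 3, 11)))  # 2 of 3 users qualify -> 0.666...
```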

Zuckerberg, who sat in the center of Building 20, the main office at the Menlo Park headquarters, placed the new FAIR and AML teams beside him. Many of the original AI hires were so close that his desk and theirs were practically touching. "It was the inner sanctum," says a former leader in the AI org (the branch of Facebook that contains all its AI teams), who recalls the CEO shuffling people in and out of his vicinity as they gained or lost his favor. "That's how you know what's on his mind," says Quiñonero. "I was always, for a couple of years, a few steps from Mark's desk."


The Achilles heel of AI might be its big carbon footprint – Mint

Posted: at 12:10 pm

A few months ago, Generative Pre-Trained Transformer-3, or GPT-3, the biggest artificial intelligence (AI) model in history and the most powerful language model ever, was launched with much fanfare by OpenAI, a San Francisco-based AI lab. Over the last few years, one of the biggest trends in natural language processing (NLP) has been the increasing size of language models (LMs), as measured by the size of training data and the number of parameters. The 2018-released BERT, which was then considered the best-in-class NLP model, was trained on a dataset of 3 billion words. The XLNet model that outperformed BERT was based on a training set of 32 billion words. Shortly thereafter, GPT-2 was trained on a dataset of 40 billion words. Dwarfing all these, GPT-3 was trained on a weighted dataset of roughly 500 billion words. GPT-2 had only 1.5 billion parameters, while GPT-3 has 175 billion.

A 2018 analysis led by Dario Amodei and Danny Hernandez of OpenAI revealed that the amount of compute used in various large AI training models had been doubling every 3.4 months since 2012, a wild deviation from the 24 months of Moore's Law, accounting for a 300,000-fold increase. GPT-3 is just the latest embodiment of this exponential trajectory. In today's deep-learning-centric paradigm, institutions around the world seem in competition to produce ever larger AI models with bigger datasets and greater computation power.
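As a quick sanity check, the two figures in the OpenAI analysis, a 3.4-month doubling time and a roughly 300,000-fold increase in training compute, are mutually consistent over a window of about five years; the sketch below just does the arithmetic.

```python
# Back-of-the-envelope check that a 3.4-month doubling time and a ~300,000x
# increase in training compute are consistent over a roughly five-year window.
import math

doubling_months = 3.4
total_increase = 300_000

doublings = math.log2(total_increase)      # ~18.2 doublings
months = doublings * doubling_months       # ~62 months, i.e. ~5.2 years
print(f"{doublings:.1f} doublings over {months:.0f} months (~{months / 12:.1f} years)")
```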

The influential paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Timnit Gebru and others was one of the first to highlight the environmental cost of the ballooning size of training datasets. In a 2019 study, "Energy and Policy Considerations for Deep Learning in NLP," Emma Strubell, Ananya Ganesh and Andrew McCallum of the University of Massachusetts, Amherst estimated that while the average American generates 36,156 pounds of carbon dioxide emissions in a year, training a single deep-learning model can generate up to 626,155 pounds of emissions, roughly equal to the carbon footprint of 125 round-trip flights between New York and Beijing.
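Putting the study's figures side by side makes the scale of the gap concrete; the short calculation below uses only the numbers quoted in the article.

```python
# Ratios implied by the figures quoted in the article (pounds of CO2).
annual_per_american_lbs = 36_156
single_model_training_lbs = 626_155
round_trip_flights = 125

print(f"~{single_model_training_lbs / annual_per_american_lbs:.0f}x one person's annual emissions")
print(f"~{single_model_training_lbs / round_trip_flights:,.0f} lbs implied per NY-Beijing round trip")
```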

Neural networks carry out a lengthy set of mathematical operations for each piece of training data. Larger datasets therefore translate to soaring computing and energy requirements. Another factor driving AI's massive energy draw is the extensive experimentation and tuning required to develop a model. Machine learning today remains largely an exercise in trial and error. Deploying AI models in real-world settings, a process known as inference, consumes even more energy than training does. It is estimated that 80-90% of the cost of a neural network is incurred at inference rather than training.

Payal Dhar, in her Nature Machine Intelligence article "The Carbon Impact of Artificial Intelligence," captures the irony of this situation. On one hand, AI can surely help reduce the effects of our climate crisis: by way of smart grid designs, for example, and by developing low-emission infrastructure and modelling climate-change predictions. On the other hand, AI is itself a significant emitter of carbon. How can "green AI", or AI that yields novel results without increasing computational cost (and ideally reducing it), be developed?

No doubt, industry and academia have to promote research into more computationally efficient algorithms, as well as hardware that requires less energy. Software authors should report the training time and computational resources used to develop a model, which would enable direct comparisons across models. But we need far more significant pointers to guide the future of AI. A strong contender for this role is the human brain.

Neuromorphic computing is an emerging field of technology that seeks to understand the actual processes of our brain and uses this knowledge to make computers think and process inputs more like human minds do. For example, our brain carries out its multi-pronged activities using just 20 watts of energy, whereas a supercomputer that is not as versatile as a human brain consumes more than 5 megawatts, 250,000 times more power. Many challenges that AI is attempting to solve today have already been solved by our minds over 300-plus millennia of human evolution. Our brain is an excellent example of few-shot learning, even from very small datasets. By understanding brain functions, AI can use that knowledge as inspiration or as validation. AI need not reinvent the wheel.

Computational neuroscience, a field of study in which mathematical tools and theories are used to investigate brain function at the level of individual neurons, has given us a great deal of new knowledge about the human brain. According to V. Srinivasa Chakravarthy, author of Demystifying the Brain: A Computational Approach, "This new field has helped unearth the fundamental principles of brain function. It has given us the right metaphor, a precise and appropriate mathematical language which can describe the brain's operation." That mathematical language makes computational neuroscience very accessible to AI practitioners.

AI has a significant role in building the world of tomorrow. But AI cannot afford to falter on its environment-friendly credentials. "Go back to nature" is the oft-repeated mantra for eco-friendly solutions. In a similar vein, to build AI systems that leave a far smaller carbon footprint, one must go back to one of the most profound creations of nature: the human brain.

Biju Dominic is the chief evangelist, Fractal Analytics, and chairman, FinalMile Consulting


The Impact of Artificial Intelligence on the IC – The Cipher Brief

Posted: at 12:10 pm

The Cipher Brief's Academic Incubator partners with national security-focused programs at colleges and universities across the country to share the work of the next generation of national security leaders.

Ian Fitzgerald is an M.A. student in International Security at George Mason University with research interests in Great Power Competition, Cyber Warfare, Emerging Technologies, Russia and China.

ACADEMIC INCUBATOR The explosion of data available to today's analysts creates a compelling need to integrate artificial intelligence (AI) into intelligence work. The objective of the Intelligence Community (IC) is to analyze, connect, apply context, infer meaning, and ultimately, make analytical judgments based on that data. The data explosion offers an incredible source of potential information, but it also creates issues for the IC.

Today's intelligence analysts find themselves moving from an information-scarce environment to one with an information surplus. The pace at which data is being generated by technical intelligence (TECHINT) collection or through open sources is growing exponentially; web-based data alone will reach 3.3 trillion gigabytes by 2021, according to the Office of the Director of National Intelligence. TECHINT disciplines are collecting data at a rate that far exceeds the IC's ability to process and exploit it into digestible information, while making it harder for analysts to find the information needed to create solid assessments.

AI is beginning to look like a way to help intelligence analysts overcome the challenges of information overload. AI allows analysts to classify data into meaningful information more reliably and accurately than humans can, and allows massive amounts of data to be fused across large, disparate data sets at scale and in real time. The technology offers anomaly detection by analyzing routine patterns of behavior and then identifying new behavior outside the norm. Such technology would give intelligence analysts the tools to identify connections between data, flag suspicious activity, spot trends and patterns, fuse disparate elements of data, map networks, and make statistical predictions about future behavior based on past history.
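The anomaly-detection idea, learn what routine activity looks like and then flag departures from it, can be illustrated with a very small statistical sketch. The data below are synthetic and the method (a simple z-score threshold) is deliberately naive; it illustrates the concept rather than describing any agency's tooling.

```python
# Minimal illustration of anomaly detection: model the routine level of some
# activity count, then flag observations far from that norm.
# Synthetic data only; this is a sketch, not any agency's actual method.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.poisson(lam=20, size=365)        # a year of "normal" daily event counts
mu, sigma = baseline.mean(), baseline.std()

new_observations = np.array([22, 19, 61, 18])   # day 3 is an injected anomaly
z_scores = (new_observations - mu) / sigma
for day, (count, z) in enumerate(zip(new_observations, z_scores), start=1):
    flag = "ANOMALY" if abs(z) > 3 else "normal"
    print(f"day {day}: count={count}, z={z:+.1f} -> {flag}")
```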

Offloading many of the IC's data-heavy processing and exploitation tasks onto machines, including data cleaning, labeling, and pattern recognition, would free up much of an analyst's time. An all-source analyst, with the support of AI-enabled systems, could save as much as 364 hours, or more than 45 working days, a year. This allows analysts to direct their energy toward the needs of the decision maker, draft their analytic judgements, and disseminate their finished assessments.

This benefit has already convinced IC leadership of the value of AI as a tool to increase the effectiveness of their analysts. Dawn Meyerriecks, the CIA's Deputy Director for Science and Technology, called AI a very powerful asset to the IC's scarcest resource: really good analysts. Meyerriecks reported in 2017 that the CIA had 137 pilot projects directly related to AI, which include everything from automatically tagging objects in video to better predicting future events based on big data and correlational evidence.

The CIA's Open Source Center is using AI to comb through news articles from around the world, monitoring trends, geopolitical developments and potential crises in real time. The National Geospatial-Intelligence Agency sees AI as a way to automate certain kinds of image analysis to help free up analysts to perform higher-level work. At the National Security Agency, AI is being used to better understand and see patterns in the vast amount of signals intelligence data it collects, screening for anomalies in web traffic patterns or other data that could portend an attack. Finally, this past year, the Office of the Director of National Intelligence published a principles and ethics framework, laying out guidelines regarding the use and implementation of AI for the IC's mission.

Leadership at the IC has gone on record saying that AI cannot and should not replace the role of the human analyst. Both former NGA Director Robert Cardillo and former NSA Director Admiral (ret.) Michael Rogers said at the 2017 Intelligence and National Security Summit that, because the bedrock of intelligence is credibility and trust, human analysts must remain involved, and that the increased use of AI does not mean IC agencies should cease using human analysts.

An AI-enabled image recognition system could accurately identify an object as a missile but would struggle to tell a coherent story about what was happening. While AI may be able to reach a conclusion that is convincing, it is unable to show how it got the answer, unlike human analysts. In other words, it can only inform, not explain. In one of the CIA's initiatives, experts found that in many cases, the AI analytics that produced the most accurate results were also the ones least able to explain how they got those answers.

The IC must begin accelerating wide-scale AI innovation and adoption in order to amplify its human analysts and stay ahead of America's adversaries. AI should first be integrated into TECHINT agencies, such as the NSA and NGA, that are experiencing data surpluses, and then into all-source agencies that depend on the availability of information to make their analytic judgments.

The first step is training the AI to perform the tasks the IC has in mind. Most machine learning methods require large, high-quality, tagged data sets, and the amount and type of data used for training matter greatly. For the IC's purposes, the training data would likely require the use of classified information. Procedures will have to be put in place to protect that classified data as the AI is trained on it. AI models are also vulnerable to attacks on the training data, in which an adversary attempts to poison the system, a problem for which AI researchers have yet to find a workable defense.
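The poisoning risk mentioned above can be demonstrated in miniature: even naive label flipping in the training set measurably degrades a model. The sketch below uses synthetic data and an off-the-shelf classifier; real poisoning attacks, and defenses against them, are considerably more subtle.

```python
# Toy demonstration of training-data poisoning by label flipping: the same model
# trained on clean vs. partially mislabeled data. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def accuracy_with_flipped_labels(flip_fraction, rng=np.random.default_rng(0)):
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_fraction * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]        # the adversary flips a fraction of labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)               # accuracy on clean test data

for frac in (0.0, 0.1, 0.3, 0.45):
    acc = accuracy_with_flipped_labels(frac)
    print(f"flipped {frac:.0%} of training labels -> test accuracy {acc:.3f}")
```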

Second, there is the question of who does the training. The IC lacks a significant degree of manpower for collecting and labeling data for machine learning purposes. While the same challenge holds for the private sector, the inherently secretive intelligence community cannot rely on crowdsourced machine-learning platforms. The IC would have to bring in or find trustworthy data scientists in the private sector who are approved to build and handle the classified training data. They would also need to take steps to mitigate the tendency for bias to sneak into the learning models, something noted in ODNI's ethics framework.

Lastly, the IC will have to stand up multi-layered cloud computing environments. Cloud technology is the only way to achieve the huge computing power needed to run AI tools at the scale of U.S. intelligence operations. This new environment would cost tens of billions to build on top of current infrastructures.

In a world of growing data surpluses, how fast the IC can make sense of the information it collects will inevitably affect U.S. national security. In order to stay ahead of U.S. adversaries, and assure policymakers of having decision advantage, the IC must begin to fundamentally change the way it performs its duties in processing. IC agencies have a strong appetite for information, but in order to keep up with it, the IC will have to begin integrating AI into its intelligence cycle.



Cowbell Cyber raises $20 million, aims to build out its AI-drive cyber insurance platform – ZDNet

Posted: at 12:09 pm

Cowbell Cyber aims to combine data science, monitoring, AI, and cyber insurance for SMEs.

Cowbell Cyber, an AI-driven cyber insurance provider for small and medium enterprises, said it raised $20 million in Series A funding to expand its underwriting ability.


The Cowbell Cyber funding comes a day after Corvus Insurance raised $100 million. The upshot here is that startups are looking to use data science to expand cyber insurance and compete against incumbent providers. The market for cyber insurance is likely to expand given that security incidents aren't exactly going away.


Brewer Lane Ventures led the round for Cowbell Cyber with participation from Pivot Investment Partners, Avanta Ventures, and Markel Corporation. Cowbell Cyber said it will use the funding for product development, sales and marketing, and expanding its risk engineering.

Cowbell Cyber launched its Prime 250 program in September. Prime 250 enables insurance agents to issue personalized cyber policies in 38 states. Cowbell Cyber currently has a risk pool of 10 million continuously monitored organizations and a network of more than 4,500 agents and brokers.

On the data science front, Cowbell Cyber aims to automate data collection with its cloud platform, provide observability and monitoring, and then combine these with risk scoring, actuarial science, and underwriting.

The company's portfolio includes cybersecurity awareness training, continuous risk assessment, and pre- and post-breach risk improvement services. Cowbell Cyber also has a free risk assessment service called Cowbell Factors, which adds a freemium element to selling cyber policies.


How AI and apps are protecting the livelihoods of small-scale fishermen – World Economic Forum

Posted: at 12:09 pm

From weather predicting apps to using artificial intelligence to monitor the fish they catch, small-scale fishermen and coastal communities are increasingly turning to digital tools to help them be more sustainable and tackle climate change.

Overfishing and illegal fishing by commercial vessels inflict significant damage on fisheries and the environment, and take food and jobs from millions of people in coastal communities who rely on fishing, environmental groups say.

In addition, climate change's effects on small-scale fishermen - who account for about 90% of the world's capture fishermen and fish workers - include fish moving to new areas in search of cooler waters or when their habitat is destroyed, rising sea levels, and an increase in the number of storms.

"Small-scale fishers are already facing many challenges - from multiple marine uses, declining fish stocks, threats from over-fishing - and climate change is just going to exacerbate those challenges," said Alexis Rife, director of small-scale fisheries initiatives at EDF.

"That means that their livelihoods are at risk. It means that their food security is at risk ... it's a pretty dire situation," she told the Thomson Reuters Foundation.

The website has a resource library where fishermen can search for topics of interest, free online courses, a community forum, discussion groups, an events page and a blog section.

Although it requires a smartphone or computer and an internet connection to access - connectivity that is often patchy in coastal areas - Rife said it has low data requirements, and they are looking at ways to enable users to view its information offline.

The website's resources can be easily shared via WhatsApp, Facebook or Twitter - platforms already widely used by many small-scale fishers to help get the best prices.

EDF also has a pilot project in Indonesia's Lampung province on Sumatra island that uses an app to record and monitor catch in blue swimming crab fisheries to enable them to be more sustainable.

A separate pilot in Indonesia uses cameras with artificial intelligence (AI) and algorithms to monitor how many vessels are going out to sea and estimate their catch.

"Fishing is the backbone of coastal and inland fisheries communities around the globe, providing food and nutrition, supporting fishing-related jobs ... (and) helping alleviate poverty," said Simon Cripps, executive director for marine conservation at green group the Wildlife Conservation Society.

Image: A diver harvesting geoduck in the Gulf of California, Mexico, in 2012 (EDF)

Since 2007, Taiwan has mandated that all small-scale fishermen use global positioning system (GPS) devices - that give a vessel's location every three minutes - with the data collected and analysed along with reports on fish catches, gear used, and auctions.

The data and monitoring gives insight into assessing fishery conditions, fisheries livelihoods and food security, and helps shape government policy.

The system was also used in 2016 to estimate loss of earnings and allocate reparations to fishermen after an oil spill.

"This year, the device has been rather helpful in assessing fishery conditions and for offshore wind power farms - trying to find a balance between the environmental protection, fishing ground, and power industries," said William Hsu, associate professor at the National Taiwan Ocean University, which helped with the project.

To alleviate privacy concerns, the government gave assurances that the data would be kept private unless a court ordered otherwise, and introduced fuel subsidies as an incentive for users.

In South Africa, the Abalobi app for small fisheries was launched about five years ago and enables users to log catches, record fish sales, capture daily expenses, find buyers and see the latest fishing regulations and notices.

Simon Funge-Smith, senior fishery officer at the United Nations' Food and Agriculture Organization (FAO) in the Asia-Pacific, said while many technologies can be useful for advocacy groups, fishers' groups and researchers, their benefits to small-scale fishers are limited.

Language, limited coverage of phone networks, and data requirements can hold back many technologies, he said.

Image: Language and data requirements can hold back many technologies in remote communities (Reuters/Zohra Bensemra)

Apps that track locations and fish catches using less time-consuming and simple entries, or help users comply with rules and laws, are more likely to succeed in empowering small-scale fishermen, he said.

Mobile phones and online banking apps have "transformed" fishing and "lubricated the entire trading arrangement of what is a very perishable product", Funge-Smith said.

The threat of data collected by digital tools being misused - for taxation, for example - is not huge, he said, adding that such misuse would discourage fishermen from using the tools or lead them to misuse them.

Ohi Masuda has been a geoduck and scallops fisherman for more than a decade near Baja California, Mexico, maintaining a family tradition that began when his ancestors came to Mexico from Japan in the 1950s.

Masuda has to cope with rising sea temperatures, which affect the types of fish he can catch and the cool water needed to process the catch before it is shipped to Asia.

"It could help us to innovate," he said about the SSF Hub, while conceding that limited internet connection could hinder access for some fishermen.

"In Mexico, we often believe that we need to concentrate our efforts only on catching enough fish to sustain a fishery without investing in post-harvest processes, transportation, added value, management, or distribution."

Original post:

How AI and apps are protecting the livelihoods of small-scale fishermen - World Economic Forum

Posted in Ai | Comments Off on How AI and apps are protecting the livelihoods of small-scale fishermen – World Economic Forum

Lenovo Puts AI to Work on Production Planning | eWEEK – eWeek

Posted: at 12:09 pm

The prestige that technologies enjoy among global businesses and consumers tends to obscure the mundane efforts and processes required to bring products to market. That is especially true for mass-produced hardware, including PCs and smartphones, whose success depends on watchful vendors overseeing massive, extraordinarily complex manufacturing processes.

Effective planning, scheduling and performance are vital to hitting production goals and satisfying paying customers and shareholders. That is what makes Lenovo's new Smart Production Planning System particularly intriguing. Lenovo's solution was recently named a finalist by the Institute for Operations Research and the Management Sciences (INFORMS) for its Franz Edelman Award for operational achievement. Let's consider it further.

To begin, what makes planning the manufacture of common products, such as laptop PCs, so difficult? Consider first what might be called customer variables. As demand and markets for tech products have evolved, vendors differentiate themselves by offering clients a wide variety of features and options to choose from.

In the case of laptops, those typically include separate brands and product families for specific use cases, such as 2-in-1s, consumer/student laptops, business solutions, gaming laptops and mobile workstations. Then there are optional choices, such as upgrades for CPU/GPU, memory, storage, displays, networking, operating systems and productivity apps. Plus, there are security and other services, warranties, accessories and peripherals.

This is all great for customers, including organizations that can effectively customize orders so that new laptops address their specific business requirements and use cases, as well as the preferences of individual users. But ensuring that those products are assembled correctly and shipped on time poses significant planning challenges.

Fulfilling customer orders typically involves dividing manufacturing processes into tasks that are assigned to specific production lines. Planning and scheduling individual tasks involve other issues, including staff availability, equipment and process status and the availability of tens of thousands of components and raw materials. Effectively managing those scenarios while avoiding known and unknowable pitfalls is what keeps factory planners up at night.

What has Lenovo done to address these challenges? Developed by Lenovo Research, the Smart Production Planning System combines AI technologies and mathematical algorithms, including emerging sequential planning algorithms based on deep reinforcement learning, into an easy-to-use optimization decision-making engine for manufacturing. The system also supports autonomous learning: The longer it runs, the smarter it gets.
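Lenovo has not published the system's internals, but the underlying assignment problem, spreading order tasks across production lines so that no line becomes a bottleneck, can be caricatured with a classic greedy heuristic. The sketch below uses made-up orders and the longest-processing-time rule purely as an illustration; the actual system reportedly relies on deep reinforcement learning and far richer constraints.

```python
# Toy version of the production-planning problem described above: assign order
# tasks to production lines so the busiest line finishes as early as possible.
# Greedy longest-processing-time heuristic with made-up data; Lenovo's actual
# system is reported to use deep reinforcement learning, not this.
import heapq

orders = {"order_A": 9.0, "order_B": 7.5, "order_C": 6.0,   # processing hours
          "order_D": 5.0, "order_E": 4.5, "order_F": 3.0}
num_lines = 3

# Min-heap of (accumulated load, line id); always give the next-largest job
# to the currently least-loaded line.
lines = [(0.0, f"line_{i}") for i in range(num_lines)]
heapq.heapify(lines)
schedule = {f"line_{i}": [] for i in range(num_lines)}

for order, hours in sorted(orders.items(), key=lambda kv: -kv[1]):
    load, line = heapq.heappop(lines)
    schedule[line].append(order)
    heapq.heappush(lines, (load + hours, line))

for line, assigned in schedule.items():
    total = sum(orders[o] for o in assigned)
    print(f"{line}: {assigned} ({total:.1f} h)")
print("makespan:", max(load for load, _ in lines), "h")
```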

The Smart Production Planning System has been deployed at Lenovo's LCFC (HeFei) Electronics Technology facility, the company's largest global research and manufacturing base. LCFC's thousands of employees fulfilled 690,000 customer orders last year, utilizing unique production processes and more than 300,000 different materials to produce over 500 specific PC products.

So how did the Smart Production Planning System do? According to Lenovo, manufacturing planning processes at the LCFC have been reduced from six hours to 90 seconds. Additionally, key performance indicators have also improved significantly. The LCFC facility's order fulfillment rate has increased by 20 percent and productivity has increased by 18 percent.

The System also supports large-scale collaboration and multi-objective tasks, such as real-time adjustment and configuration according to users' specific production objectives. LCFC workers can also set hyper-parameters on the System, such as prioritizing individual manufacturing segments' production goals to keep them aligned with shifts in demand and the manufacturing environment.

According to Lenovo, the Smart Production Planning System at the LCFC is the first time in the industry that AI technology has been deployed to enhance large-scale production scheduling operations. The results are clearly positive, and it will be fascinating to see how the System's autonomous learning capabilities impact production over time.

The solution reflects well on Lenovo Research and should continue to positively impact the company's production efficiency and bottom line. Just as importantly, as the System evolves, it should open new commercial opportunities for the company's service- and solution-led efforts in vertical industries. In fact, it seems highly likely that other large-scale manufacturers will want to capture the same kinds of efficiency and performance benefits that the Smart Production Planning System is already providing to Lenovo.

Charles King is a principal analyst at PUND-IT and a regular contributor to eWEEK. He is considered one of the top 10 IT analysts in the world by Apollo Research, which evaluated 3,960 technology analysts and their individual press coverage metrics. © 2020 Pund-IT, Inc. All rights reserved.


Dynatrace Recognized by AWS for Experience and Expertise in Applied AI – Business Wire

Posted: at 12:09 pm

WALTHAM, Mass.--(BUSINESS WIRE)--Software intelligence company Dynatrace (NYSE: DT) announced today it has achieved Amazon Web Services (AWS) Machine Learning Competency status in the new Applied Artificial Intelligence (Applied AI) category. This designation reflects AWS's recognition that Dynatrace has demonstrated deep experience and proven customer success building AI-powered solutions on AWS to help some of the world's largest organizations accelerate digital transformation.

"We have successfully built our cloud-native applications on AWS, and Dynatrace's AI and automation ensure they are fast, efficient, and predictable," said David Priestley, Chief Digital Officer at Vitality. "Dynatrace's deep integrations with AWS, paired with its AI expertise, enables us to find anomalies in our applications and user journeys before they impact business outcomes. The platform's automation has enabled us to improve customer experience through faster responses to customer requests and freeing up time for our teams to innovate."

According to recent research, 86% of organizations are using cloud-native technologies, including hybrid, multicloud architectures, Kubernetes, microservices, and containers. These technologies are constantly changing. To get the most out of them at scale, and to manage constant change and reduce repetitive, manual work, digital teams need continuous automation and AI-assistance. Dynatraces AI and automation in AWS and hybrid-cloud environments delivers speed and efficiency, enabling IT, DevOps, and SRE teams to innovate faster and optimize customer experiences.

"We are thrilled to be recognized by AWS for our AI and automation, and, most importantly, how our approach helps our joint customers succeed with their digital transformation strategies," said Mike Maciag, Chief Marketing Officer at Dynatrace. "The Dynatrace platform delivers out-of-the-box automatic and intelligent observability, which dramatically reduces manual and repetitive tasks and accelerates results, whether that is speed and quality of innovation for development, automation and efficacy for operations, or optimization and consistency of user experiences and business outcomes."

"Many companies are reinventing themselves using AWS ML and AI. We are delighted to welcome Dynatrace as an inaugural AWS Partner in our newly expanded AWS Machine Learning Competency Program," said Julien Simon, Global AI & ML Evangelist, AWS. "Dynatrace's innovation-focused solutions, powered and vetted by AWS, and its proven track record of helping customers, will undoubtedly help many other customers transform their business."

AI and ML-driven applications are maturing rapidly and creating new demands for enterprises. AWS is keeping pace and continuously evolving AWS Competency Programs to allow customers to engage enhanced AWS Partner technology and consulting offerings. AWS launched two new Categories within the AWS Machine Learning Competency to help customers easily and confidently identify and engage highly specialized AWS Partners with Applied AI and/or ML Ops capabilities. With this program expansion, customers will be able to go beyond the current data processing and data science platform capabilities and find experienced AWS Partners who will help productionize successful models (ML Ops) and find off-the-shelf packages for their business problems (Applied AI).

Visit the AWS website to learn more about the AWS Competency Program. Visit the Dynatrace website for an interactive experience describing how Dynatrace's AI engine, Davis, helps the world's largest organizations accelerate digital transformation.

About Dynatrace

Dynatrace provides software intelligence to simplify cloud complexity and accelerate digital transformation. With automatic and intelligent observability at scale, our all-in-one platform delivers precise answers about the performance and security of applications, the underlying infrastructure, and the experience of all users to enable organizations to innovate faster, collaborate more efficiently, and deliver more value with dramatically less effort. That's why many of the world's largest enterprises trust Dynatrace to modernize and automate cloud operations, release better software faster, and deliver unrivaled digital experiences.

Curious to see how you can simplify your cloud? Let us show you. Visit our trial page for a free 15-day Dynatrace trial.

To learn more about how Dynatrace can help your business, visit https://www.dynatrace.com, visit our blog, and follow us on Twitter @dynatrace.
